May 17 00:36:05.019719 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025
May 17 00:36:05.019744 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:36:05.019755 kernel: BIOS-provided physical RAM map:
May 17 00:36:05.019762 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 17 00:36:05.019776 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
May 17 00:36:05.019783 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
May 17 00:36:05.019794 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
May 17 00:36:05.019803 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
May 17 00:36:05.019809 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
May 17 00:36:05.019815 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
May 17 00:36:05.019824 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
May 17 00:36:05.019830 kernel: printk: bootconsole [earlyser0] enabled
May 17 00:36:05.019838 kernel: NX (Execute Disable) protection: active
May 17 00:36:05.019844 kernel: efi: EFI v2.70 by Microsoft
May 17 00:36:05.019858 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
May 17 00:36:05.019869 kernel: random: crng init done
May 17 00:36:05.019876 kernel: SMBIOS 3.1.0 present.
May 17 00:36:05.019883 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
May 17 00:36:05.019892 kernel: Hypervisor detected: Microsoft Hyper-V
May 17 00:36:05.019899 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
May 17 00:36:05.019908 kernel: Hyper-V Host Build:20348-10.0-1-0.1827
May 17 00:36:05.019914 kernel: Hyper-V: Nested features: 0x1e0101
May 17 00:36:05.019922 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
May 17 00:36:05.019929 kernel: Hyper-V: Using hypercall for remote TLB flush
May 17 00:36:05.019935 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 17 00:36:05.019943 kernel: tsc: Marking TSC unstable due to running on Hyper-V
May 17 00:36:05.019951 kernel: tsc: Detected 2593.905 MHz processor
May 17 00:36:05.019960 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:36:05.019967 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:36:05.019974 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
May 17 00:36:05.019980 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:36:05.019989 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
May 17 00:36:05.019999 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
May 17 00:36:05.020007 kernel: Using GB pages for direct mapping
May 17 00:36:05.020014 kernel: Secure boot disabled
May 17 00:36:05.020022 kernel: ACPI: Early table checksum verification disabled
May 17 00:36:05.020030 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
May 17 00:36:05.020037 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:36:05.020046 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:36:05.020052 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
May 17 00:36:05.020067 kernel: ACPI: FACS 0x000000003FFFE000 000040
May 17 00:36:05.020076 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:36:05.020084 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:36:05.020091 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:36:05.020098 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:36:05.020107 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:36:05.020118 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:36:05.020127 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:36:05.020134 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
May 17 00:36:05.020145 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
May 17 00:36:05.020152 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
May 17 00:36:05.020161 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
May 17 00:36:05.020168 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
May 17 00:36:05.020179 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
May 17 00:36:05.020190 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
May 17 00:36:05.020197 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
May 17 00:36:05.020205 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
May 17 00:36:05.020215 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
May 17 00:36:05.020223 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 17 00:36:05.020231 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 17 00:36:05.020238 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
May 17 00:36:05.020248 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
May 17 00:36:05.020257 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
May 17 00:36:05.020267 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
May 17 00:36:05.020275 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
May 17 00:36:05.020284 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
May 17 00:36:05.020291 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
May 17 00:36:05.020297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
May 17 00:36:05.020304 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
May 17 00:36:05.020311 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
May 17 00:36:05.020317 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
May 17 00:36:05.020324 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
May 17 00:36:05.020336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
May 17 00:36:05.020343 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
May 17 00:36:05.020350 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
May 17 00:36:05.020359 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
May 17 00:36:05.020368 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
May 17 00:36:05.020376 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
May 17 00:36:05.020383 kernel: Zone ranges:
May 17 00:36:05.020393 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:36:05.020401 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 17 00:36:05.020412 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
May 17 00:36:05.020418 kernel: Movable zone start for each node
May 17 00:36:05.020429 kernel: Early memory node ranges
May 17 00:36:05.020437 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 17 00:36:05.020446 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
May 17 00:36:05.020452 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
May 17 00:36:05.020462 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
May 17 00:36:05.020470 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
May 17 00:36:05.020479 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:36:05.020488 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 17 00:36:05.020499 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
May 17 00:36:05.020507 kernel: ACPI: PM-Timer IO Port: 0x408
May 17 00:36:05.020516 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
May 17 00:36:05.020523 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
May 17 00:36:05.020533 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:36:05.020541 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:36:05.020550 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
May 17 00:36:05.020557 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:36:05.020568 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
May 17 00:36:05.020577 kernel: Booting paravirtualized kernel on Hyper-V
May 17 00:36:05.020585 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:36:05.020594 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
May 17 00:36:05.020602 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
May 17 00:36:05.020612 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
May 17 00:36:05.020619 kernel: pcpu-alloc: [0] 0 1
May 17 00:36:05.020627 kernel: Hyper-V: PV spinlocks enabled
May 17 00:36:05.020635 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:36:05.020647 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
May 17 00:36:05.020654 kernel: Policy zone: Normal
May 17 00:36:05.020664 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:36:05.020674 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:36:05.020683 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
May 17 00:36:05.020690 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:36:05.020700 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:36:05.020708 kernel: Memory: 8071676K/8387460K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 315524K reserved, 0K cma-reserved)
May 17 00:36:05.020719 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:36:05.020727 kernel: ftrace: allocating 34585 entries in 136 pages
May 17 00:36:05.020744 kernel: ftrace: allocated 136 pages with 2 groups
May 17 00:36:05.020755 kernel: rcu: Hierarchical RCU implementation.
May 17 00:36:05.020768 kernel: rcu: RCU event tracing is enabled.
May 17 00:36:05.020778 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:36:05.020787 kernel: Rude variant of Tasks RCU enabled.
May 17 00:36:05.020795 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:36:05.020804 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:36:05.020814 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:36:05.020823 kernel: Using NULL legacy PIC
May 17 00:36:05.020833 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
May 17 00:36:05.020841 kernel: Console: colour dummy device 80x25
May 17 00:36:05.020850 kernel: printk: console [tty1] enabled
May 17 00:36:05.020858 kernel: printk: console [ttyS0] enabled
May 17 00:36:05.020868 kernel: printk: bootconsole [earlyser0] disabled
May 17 00:36:05.020878 kernel: ACPI: Core revision 20210730
May 17 00:36:05.020888 kernel: Failed to register legacy timer interrupt
May 17 00:36:05.020896 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:36:05.020906 kernel: Hyper-V: Using IPI hypercalls
May 17 00:36:05.020914 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
May 17 00:36:05.020921 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 17 00:36:05.020928 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 17 00:36:05.020935 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:36:05.020942 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:36:05.020949 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:36:05.020958 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 17 00:36:05.020966 kernel: RETBleed: Vulnerable
May 17 00:36:05.020972 kernel: Speculative Store Bypass: Vulnerable
May 17 00:36:05.020979 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:36:05.020987 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:36:05.020994 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:36:05.021001 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:36:05.021011 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:36:05.021018 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 17 00:36:05.021025 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 17 00:36:05.021037 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 17 00:36:05.021044 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:36:05.021053 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
May 17 00:36:05.021062 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
May 17 00:36:05.021072 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
May 17 00:36:05.021080 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
May 17 00:36:05.021087 kernel: Freeing SMP alternatives memory: 32K
May 17 00:36:05.021096 kernel: pid_max: default: 32768 minimum: 301
May 17 00:36:05.021104 kernel: LSM: Security Framework initializing
May 17 00:36:05.021114 kernel: SELinux: Initializing.
May 17 00:36:05.021122 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:36:05.021131 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:36:05.021142 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
May 17 00:36:05.021151 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
May 17 00:36:05.021159 kernel: signal: max sigframe size: 3632
May 17 00:36:05.021169 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:36:05.021178 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 17 00:36:05.021187 kernel: smp: Bringing up secondary CPUs ...
May 17 00:36:05.021194 kernel: x86: Booting SMP configuration:
May 17 00:36:05.021204 kernel: .... node #0, CPUs: #1
May 17 00:36:05.021214 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
May 17 00:36:05.021224 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 17 00:36:05.021234 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:36:05.021243 kernel: smpboot: Max logical packages: 1
May 17 00:36:05.021253 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
May 17 00:36:05.021260 kernel: devtmpfs: initialized
May 17 00:36:05.021270 kernel: x86/mm: Memory block size: 128MB
May 17 00:36:05.021280 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
May 17 00:36:05.021288 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:36:05.021296 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:36:05.021308 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:36:05.021318 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:36:05.021325 kernel: audit: initializing netlink subsys (disabled)
May 17 00:36:05.021336 kernel: audit: type=2000 audit(1747442164.025:1): state=initialized audit_enabled=0 res=1
May 17 00:36:05.021345 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:36:05.021354 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:36:05.021361 kernel: cpuidle: using governor menu
May 17 00:36:05.021372 kernel: ACPI: bus type PCI registered
May 17 00:36:05.021381 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:36:05.021391 kernel: dca service started, version 1.12.1
May 17 00:36:05.021401 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:36:05.021410 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:36:05.021420 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:36:05.021427 kernel: ACPI: Added _OSI(Module Device)
May 17 00:36:05.021438 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:36:05.021447 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:36:05.021456 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:36:05.021464 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 17 00:36:05.021475 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 17 00:36:05.021486 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 17 00:36:05.021495 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:36:05.021508 kernel: ACPI: Interpreter enabled
May 17 00:36:05.021521 kernel: ACPI: PM: (supports S0 S5)
May 17 00:36:05.021536 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:36:05.021550 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:36:05.021564 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
May 17 00:36:05.021578 kernel: iommu: Default domain type: Translated
May 17 00:36:05.021596 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:36:05.021610 kernel: vgaarb: loaded
May 17 00:36:05.021624 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:36:05.021638 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:36:05.021652 kernel: PTP clock support registered
May 17 00:36:05.021666 kernel: Registered efivars operations
May 17 00:36:05.021680 kernel: PCI: Using ACPI for IRQ routing
May 17 00:36:05.021694 kernel: PCI: System does not support PCI
May 17 00:36:05.021709 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
May 17 00:36:05.021728 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:36:05.021745 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:36:05.021760 kernel: pnp: PnP ACPI init
May 17 00:36:05.021787 kernel: pnp: PnP ACPI: found 3 devices
May 17 00:36:05.021801 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:36:05.021816 kernel: NET: Registered PF_INET protocol family
May 17 00:36:05.021831 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:36:05.021845 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
May 17 00:36:05.021862 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:36:05.021882 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:36:05.021896 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
May 17 00:36:05.021910 kernel: TCP: Hash tables configured (established 65536 bind 65536)
May 17 00:36:05.021924 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 17 00:36:05.021938 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 17 00:36:05.021952 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:36:05.021965 kernel: NET: Registered PF_XDP protocol family
May 17 00:36:05.021980 kernel: PCI: CLS 0 bytes, default 64
May 17 00:36:05.021993 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 17 00:36:05.022011 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
May 17 00:36:05.022025 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 17 00:36:05.022040 kernel: Initialise system trusted keyrings
May 17 00:36:05.022055 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
May 17 00:36:05.022069 kernel: Key type asymmetric registered
May 17 00:36:05.022084 kernel: Asymmetric key parser 'x509' registered
May 17 00:36:05.022098 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 17 00:36:05.022113 kernel: io scheduler mq-deadline registered
May 17 00:36:05.022127 kernel: io scheduler kyber registered
May 17 00:36:05.022147 kernel: io scheduler bfq registered
May 17 00:36:05.022161 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:36:05.022175 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:36:05.022189 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:36:05.022205 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 17 00:36:05.022219 kernel: i8042: PNP: No PS/2 controller found.
May 17 00:36:05.022412 kernel: rtc_cmos 00:02: registered as rtc0
May 17 00:36:05.022519 kernel: rtc_cmos 00:02: setting system clock to 2025-05-17T00:36:04 UTC (1747442164)
May 17 00:36:05.022621 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
May 17 00:36:05.022637 kernel: intel_pstate: CPU model not supported
May 17 00:36:05.022649 kernel: efifb: probing for efifb
May 17 00:36:05.022661 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 17 00:36:05.022674 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 17 00:36:05.022686 kernel: efifb: scrolling: redraw
May 17 00:36:05.022698 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 17 00:36:05.022711 kernel: Console: switching to colour frame buffer device 128x48
May 17 00:36:05.022726 kernel: fb0: EFI VGA frame buffer device
May 17 00:36:05.022738 kernel: pstore: Registered efi as persistent store backend
May 17 00:36:05.022751 kernel: NET: Registered PF_INET6 protocol family
May 17 00:36:05.022773 kernel: Segment Routing with IPv6
May 17 00:36:05.022786 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:36:05.022798 kernel: NET: Registered PF_PACKET protocol family
May 17 00:36:05.022810 kernel: Key type dns_resolver registered
May 17 00:36:05.022822 kernel: IPI shorthand broadcast: enabled
May 17 00:36:05.022835 kernel: sched_clock: Marking stable (725062200, 26170600)->(966243700, -215010900)
May 17 00:36:05.022847 kernel: registered taskstats version 1
May 17 00:36:05.022862 kernel: Loading compiled-in X.509 certificates
May 17 00:36:05.022874 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c'
May 17 00:36:05.022886 kernel: Key type .fscrypt registered
May 17 00:36:05.022898 kernel: Key type fscrypt-provisioning registered
May 17 00:36:05.022910 kernel: pstore: Using crash dump compression: deflate
May 17 00:36:05.022923 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:36:05.022935 kernel: ima: Allocated hash algorithm: sha1
May 17 00:36:05.022947 kernel: ima: No architecture policies found
May 17 00:36:05.022962 kernel: clk: Disabling unused clocks
May 17 00:36:05.022974 kernel: Freeing unused kernel image (initmem) memory: 47472K
May 17 00:36:05.022986 kernel: Write protecting the kernel read-only data: 28672k
May 17 00:36:05.022999 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 17 00:36:05.023011 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 17 00:36:05.023023 kernel: Run /init as init process
May 17 00:36:05.023035 kernel: with arguments:
May 17 00:36:05.023048 kernel: /init
May 17 00:36:05.023059 kernel: with environment:
May 17 00:36:05.023073 kernel: HOME=/
May 17 00:36:05.023085 kernel: TERM=linux
May 17 00:36:05.023097 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:36:05.023112 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:36:05.023126 systemd[1]: Detected virtualization microsoft.
May 17 00:36:05.023140 systemd[1]: Detected architecture x86-64.
May 17 00:36:05.023154 systemd[1]: Running in initrd.
May 17 00:36:05.023168 systemd[1]: No hostname configured, using default hostname.
May 17 00:36:05.023188 systemd[1]: Hostname set to .
May 17 00:36:05.023202 systemd[1]: Initializing machine ID from random generator.
May 17 00:36:05.023216 systemd[1]: Queued start job for default target initrd.target.
May 17 00:36:05.023228 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:36:05.023240 systemd[1]: Reached target cryptsetup.target.
May 17 00:36:05.023252 systemd[1]: Reached target paths.target.
May 17 00:36:05.023264 systemd[1]: Reached target slices.target.
May 17 00:36:05.023277 systemd[1]: Reached target swap.target.
May 17 00:36:05.023291 systemd[1]: Reached target timers.target.
May 17 00:36:05.023304 systemd[1]: Listening on iscsid.socket.
May 17 00:36:05.023317 systemd[1]: Listening on iscsiuio.socket.
May 17 00:36:05.023331 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:36:05.023344 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:36:05.023357 systemd[1]: Listening on systemd-journald.socket.
May 17 00:36:05.023370 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:36:05.023384 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:36:05.023401 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:36:05.023414 systemd[1]: Reached target sockets.target.
May 17 00:36:05.023429 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:36:05.023442 systemd[1]: Finished network-cleanup.service.
May 17 00:36:05.023456 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:36:05.023470 systemd[1]: Starting systemd-journald.service...
May 17 00:36:05.023485 systemd[1]: Starting systemd-modules-load.service...
May 17 00:36:05.023499 systemd[1]: Starting systemd-resolved.service...
May 17 00:36:05.023513 systemd[1]: Starting systemd-vconsole-setup.service...
May 17 00:36:05.023533 systemd-journald[183]: Journal started
May 17 00:36:05.023595 systemd-journald[183]: Runtime Journal (/run/log/journal/6bc36d67223b4ddc8be33617cb2c4f64) is 8.0M, max 159.0M, 151.0M free.
May 17 00:36:05.025089 systemd-modules-load[184]: Inserted module 'overlay'
May 17 00:36:05.042782 systemd[1]: Started systemd-journald.service.
May 17 00:36:05.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.059775 kernel: audit: type=1130 audit(1747442165.047:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.048287 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:36:05.059962 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:36:05.063465 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 00:36:05.069871 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 00:36:05.079381 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:36:05.100802 kernel: audit: type=1130 audit(1747442165.058:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.104353 systemd-resolved[185]: Positive Trust Anchors:
May 17 00:36:05.121231 kernel: audit: type=1130 audit(1747442165.062:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.104578 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:36:05.104628 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:36:05.108483 systemd-resolved[185]: Defaulting to hostname 'linux'.
May 17 00:36:05.151072 kernel: audit: type=1130 audit(1747442165.068:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.139183 systemd[1]: Started systemd-resolved.service.
May 17 00:36:05.151372 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:36:05.154871 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:36:05.177243 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:36:05.177271 kernel: audit: type=1130 audit(1747442165.150:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.175374 systemd[1]: Reached target nss-lookup.target.
May 17 00:36:05.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.201987 kernel: audit: type=1130 audit(1747442165.153:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.202037 kernel: audit: type=1130 audit(1747442165.174:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.203187 systemd[1]: Starting dracut-cmdline.service...
May 17 00:36:05.208482 kernel: Bridge firewalling registered
May 17 00:36:05.206341 systemd-modules-load[184]: Inserted module 'br_netfilter'
May 17 00:36:05.222538 dracut-cmdline[199]: dracut-dracut-053
May 17 00:36:05.226457 dracut-cmdline[199]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:36:05.248786 kernel: SCSI subsystem initialized
May 17 00:36:05.269794 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:36:05.279118 kernel: device-mapper: uevent: version 1.0.3
May 17 00:36:05.279191 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 17 00:36:05.285876 systemd-modules-load[184]: Inserted module 'dm_multipath'
May 17 00:36:05.286627 systemd[1]: Finished systemd-modules-load.service.
May 17 00:36:05.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.304832 kernel: audit: type=1130 audit(1747442165.290:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.305082 systemd[1]: Starting systemd-sysctl.service...
May 17 00:36:05.310886 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:36:05.322288 systemd[1]: Finished systemd-sysctl.service.
May 17 00:36:05.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.335782 kernel: audit: type=1130 audit(1747442165.323:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.343787 kernel: iscsi: registered transport (tcp)
May 17 00:36:05.369757 kernel: iscsi: registered transport (qla4xxx)
May 17 00:36:05.369823 kernel: QLogic iSCSI HBA Driver
May 17 00:36:05.399568 systemd[1]: Finished dracut-cmdline.service.
May 17 00:36:05.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' May 17 00:36:05.404068 systemd[1]: Starting dracut-pre-udev.service... May 17 00:36:05.453788 kernel: raid6: avx512x4 gen() 18298 MB/s May 17 00:36:05.472777 kernel: raid6: avx512x4 xor() 8307 MB/s May 17 00:36:05.491774 kernel: raid6: avx512x2 gen() 18270 MB/s May 17 00:36:05.511777 kernel: raid6: avx512x2 xor() 29640 MB/s May 17 00:36:05.530776 kernel: raid6: avx512x1 gen() 18348 MB/s May 17 00:36:05.549785 kernel: raid6: avx512x1 xor() 26871 MB/s May 17 00:36:05.569787 kernel: raid6: avx2x4 gen() 18275 MB/s May 17 00:36:05.589783 kernel: raid6: avx2x4 xor() 7591 MB/s May 17 00:36:05.608778 kernel: raid6: avx2x2 gen() 18581 MB/s May 17 00:36:05.628780 kernel: raid6: avx2x2 xor() 22263 MB/s May 17 00:36:05.648776 kernel: raid6: avx2x1 gen() 14069 MB/s May 17 00:36:05.668776 kernel: raid6: avx2x1 xor() 19422 MB/s May 17 00:36:05.688781 kernel: raid6: sse2x4 gen() 11723 MB/s May 17 00:36:05.707779 kernel: raid6: sse2x4 xor() 7380 MB/s May 17 00:36:05.727775 kernel: raid6: sse2x2 gen() 12867 MB/s May 17 00:36:05.747776 kernel: raid6: sse2x2 xor() 7721 MB/s May 17 00:36:05.766781 kernel: raid6: sse2x1 gen() 11623 MB/s May 17 00:36:05.789533 kernel: raid6: sse2x1 xor() 5912 MB/s May 17 00:36:05.789567 kernel: raid6: using algorithm avx2x2 gen() 18581 MB/s May 17 00:36:05.789580 kernel: raid6: .... xor() 22263 MB/s, rmw enabled May 17 00:36:05.796963 kernel: raid6: using avx512x2 recovery algorithm May 17 00:36:05.811789 kernel: xor: automatically using best checksumming function avx May 17 00:36:05.907788 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 17 00:36:05.915781 systemd[1]: Finished dracut-pre-udev.service. May 17 00:36:05.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:05.919000 audit: BPF prog-id=7 op=LOAD May 17 00:36:05.919000 audit: BPF prog-id=8 op=LOAD May 17 00:36:05.920541 systemd[1]: Starting systemd-udevd.service... May 17 00:36:05.934759 systemd-udevd[384]: Using default interface naming scheme 'v252'. May 17 00:36:05.939386 systemd[1]: Started systemd-udevd.service. May 17 00:36:05.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:05.946991 systemd[1]: Starting dracut-pre-trigger.service... May 17 00:36:05.963738 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation May 17 00:36:05.994385 systemd[1]: Finished dracut-pre-trigger.service. May 17 00:36:05.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:05.997414 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:36:06.032897 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:36:06.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:06.090815 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:36:06.117801 kernel: hv_vmbus: Vmbus version:5.2 May 17 00:36:06.128786 kernel: hv_vmbus: registering driver hyperv_keyboard May 17 00:36:06.149804 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 May 17 00:36:06.149853 kernel: hv_vmbus: registering driver hv_storvsc May 17 00:36:06.155694 kernel: AVX2 version of gcm_enc/dec engaged. 
May 17 00:36:06.163349 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:36:06.163412 kernel: AES CTR mode by8 optimization enabled May 17 00:36:06.167804 kernel: scsi host0: storvsc_host_t May 17 00:36:06.173957 kernel: scsi host1: storvsc_host_t May 17 00:36:06.174155 kernel: hv_vmbus: registering driver hv_netvsc May 17 00:36:06.174169 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 17 00:36:06.184829 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 17 00:36:06.200804 kernel: hv_vmbus: registering driver hid_hyperv May 17 00:36:06.222675 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 May 17 00:36:06.222739 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 17 00:36:06.222952 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 17 00:36:06.232019 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:36:06.232041 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 17 00:36:06.246011 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 17 00:36:06.265476 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 17 00:36:06.265634 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:36:06.265806 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 17 00:36:06.265966 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 17 00:36:06.266115 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:36:06.266128 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 00:36:06.351798 kernel: hv_netvsc 7c1e5202-de5b-7c1e-5202-de5b7c1e5202 eth0: VF slot 1 added May 17 00:36:06.364569 kernel: hv_vmbus: registering driver hv_pci May 17 00:36:06.364623 kernel: hv_pci 822200dd-febb-42e8-91e2-33713a3a15e5: PCI VMBus probing: Using version 0x10004 May 17 00:36:06.443643 kernel: hv_pci 822200dd-febb-42e8-91e2-33713a3a15e5: PCI host bridge to bus 
febb:00 May 17 00:36:06.443846 kernel: pci_bus febb:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] May 17 00:36:06.444020 kernel: pci_bus febb:00: No busn resource found for root bus, will use [bus 00-ff] May 17 00:36:06.444164 kernel: pci febb:00:02.0: [15b3:1016] type 00 class 0x020000 May 17 00:36:06.444332 kernel: pci febb:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] May 17 00:36:06.444489 kernel: pci febb:00:02.0: enabling Extended Tags May 17 00:36:06.444649 kernel: pci febb:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at febb:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 17 00:36:06.444817 kernel: pci_bus febb:00: busn_res: [bus 00-ff] end is updated to 00 May 17 00:36:06.444963 kernel: pci febb:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] May 17 00:36:06.535795 kernel: mlx5_core febb:00:02.0: firmware version: 14.30.5000 May 17 00:36:06.783105 kernel: mlx5_core febb:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) May 17 00:36:06.783288 kernel: mlx5_core febb:00:02.0: Supported tc offload range - chains: 1, prios: 1 May 17 00:36:06.783444 kernel: mlx5_core febb:00:02.0: mlx5e_tc_post_act_init:40:(pid 281): firmware level support is missing May 17 00:36:06.783606 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (443) May 17 00:36:06.783627 kernel: hv_netvsc 7c1e5202-de5b-7c1e-5202-de5b7c1e5202 eth0: VF registering: eth1 May 17 00:36:06.783794 kernel: mlx5_core febb:00:02.0 eth1: joined to eth0 May 17 00:36:06.745939 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 00:36:06.794788 kernel: mlx5_core febb:00:02.0 enP65211s1: renamed from eth1 May 17 00:36:06.796240 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:36:06.921536 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
May 17 00:36:06.924977 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 17 00:36:06.933515 systemd[1]: Starting disk-uuid.service... May 17 00:36:07.022758 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 17 00:36:07.957738 disk-uuid[561]: The operation has completed successfully. May 17 00:36:07.960343 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:36:08.041581 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:36:08.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:08.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:08.041685 systemd[1]: Finished disk-uuid.service. May 17 00:36:08.049158 systemd[1]: Starting verity-setup.service... May 17 00:36:08.083899 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 17 00:36:08.448192 systemd[1]: Found device dev-mapper-usr.device. May 17 00:36:08.451097 systemd[1]: Mounting sysusr-usr.mount... May 17 00:36:08.457046 systemd[1]: Finished verity-setup.service. May 17 00:36:08.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:08.528536 systemd[1]: Mounted sysusr-usr.mount. May 17 00:36:08.532983 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 00:36:08.530283 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 17 00:36:08.531513 systemd[1]: Starting ignition-setup.service... 
May 17 00:36:08.540846 systemd[1]: Starting parse-ip-for-networkd.service... May 17 00:36:08.554843 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:36:08.554887 kernel: BTRFS info (device sda6): using free space tree May 17 00:36:08.554902 kernel: BTRFS info (device sda6): has skinny extents May 17 00:36:08.611669 systemd[1]: Finished parse-ip-for-networkd.service. May 17 00:36:08.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:08.616000 audit: BPF prog-id=9 op=LOAD May 17 00:36:08.618202 systemd[1]: Starting systemd-networkd.service... May 17 00:36:08.640344 systemd-networkd[802]: lo: Link UP May 17 00:36:08.640354 systemd-networkd[802]: lo: Gained carrier May 17 00:36:08.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:08.641313 systemd-networkd[802]: Enumeration completed May 17 00:36:08.641400 systemd[1]: Started systemd-networkd.service. May 17 00:36:08.643973 systemd[1]: Reached target network.target. May 17 00:36:08.645157 systemd-networkd[802]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:36:08.649691 systemd[1]: Starting iscsiuio.service... May 17 00:36:08.664623 systemd[1]: Started iscsiuio.service. May 17 00:36:08.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:08.667402 systemd[1]: Starting iscsid.service... 
May 17 00:36:08.673885 iscsid[810]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 17 00:36:08.673885 iscsid[810]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 17 00:36:08.673885 iscsid[810]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 17 00:36:08.673885 iscsid[810]: If using hardware iscsi like qla4xxx this message can be ignored. May 17 00:36:08.694486 iscsid[810]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 17 00:36:08.694486 iscsid[810]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 17 00:36:08.697528 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:36:08.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:08.697896 systemd[1]: Started iscsid.service. May 17 00:36:08.702723 systemd[1]: Starting dracut-initqueue.service... May 17 00:36:08.711791 kernel: mlx5_core febb:00:02.0 enP65211s1: Link up May 17 00:36:08.719160 systemd[1]: Finished dracut-initqueue.service. May 17 00:36:08.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:08.721367 systemd[1]: Reached target remote-fs-pre.target. May 17 00:36:08.724960 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:36:08.726935 systemd[1]: Reached target remote-fs.target. 
May 17 00:36:08.728571 systemd[1]: Starting dracut-pre-mount.service... May 17 00:36:08.740224 systemd[1]: Finished dracut-pre-mount.service. May 17 00:36:08.745104 kernel: hv_netvsc 7c1e5202-de5b-7c1e-5202-de5b7c1e5202 eth0: Data path switched to VF: enP65211s1 May 17 00:36:08.750168 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:36:08.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:08.751430 systemd-networkd[802]: enP65211s1: Link UP May 17 00:36:08.753395 systemd-networkd[802]: eth0: Link UP May 17 00:36:08.754997 systemd-networkd[802]: eth0: Gained carrier May 17 00:36:08.760929 systemd-networkd[802]: enP65211s1: Gained carrier May 17 00:36:08.791851 systemd-networkd[802]: eth0: DHCPv4 address 10.200.4.30/24, gateway 10.200.4.1 acquired from 168.63.129.16 May 17 00:36:08.799151 systemd[1]: Finished ignition-setup.service. May 17 00:36:08.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:08.804266 systemd[1]: Starting ignition-fetch-offline.service... 
May 17 00:36:10.508014 systemd-networkd[802]: eth0: Gained IPv6LL May 17 00:36:12.022242 ignition[829]: Ignition 2.14.0 May 17 00:36:12.022259 ignition[829]: Stage: fetch-offline May 17 00:36:12.022348 ignition[829]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:36:12.022399 ignition[829]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:36:12.165517 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:36:12.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:12.167315 systemd[1]: Finished ignition-fetch-offline.service. May 17 00:36:12.190398 kernel: kauditd_printk_skb: 18 callbacks suppressed May 17 00:36:12.190432 kernel: audit: type=1130 audit(1747442172.170:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:12.165732 ignition[829]: parsed url from cmdline: "" May 17 00:36:12.172848 systemd[1]: Starting ignition-fetch.service... 
May 17 00:36:12.165736 ignition[829]: no config URL provided May 17 00:36:12.165744 ignition[829]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:36:12.165753 ignition[829]: no config at "/usr/lib/ignition/user.ign" May 17 00:36:12.165760 ignition[829]: failed to fetch config: resource requires networking May 17 00:36:12.166210 ignition[829]: Ignition finished successfully May 17 00:36:12.181856 ignition[835]: Ignition 2.14.0 May 17 00:36:12.181864 ignition[835]: Stage: fetch May 17 00:36:12.181997 ignition[835]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:36:12.182034 ignition[835]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:36:12.186574 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:36:12.189672 ignition[835]: parsed url from cmdline: "" May 17 00:36:12.189678 ignition[835]: no config URL provided May 17 00:36:12.189688 ignition[835]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:36:12.189705 ignition[835]: no config at "/usr/lib/ignition/user.ign" May 17 00:36:12.189745 ignition[835]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 17 00:36:12.250115 ignition[835]: GET result: OK May 17 00:36:12.250207 ignition[835]: config has been read from IMDS userdata May 17 00:36:12.250238 ignition[835]: parsing config with SHA512: b4a8edf14c1b2d7db696311caa5ec27be9b9c4b2efb2ee96f7b38cecf118b244ee09a7537cc20131c86007d861bb7b025e9f39880135e6e0492cb523c0980718 May 17 00:36:12.253901 unknown[835]: fetched base config from "system" May 17 00:36:12.253911 unknown[835]: fetched base config from "system" May 17 00:36:12.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:36:12.254521 ignition[835]: fetch: fetch complete May 17 00:36:12.274451 kernel: audit: type=1130 audit(1747442172.257:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:12.253920 unknown[835]: fetched user config from "azure" May 17 00:36:12.254529 ignition[835]: fetch: fetch passed May 17 00:36:12.256078 systemd[1]: Finished ignition-fetch.service. May 17 00:36:12.254582 ignition[835]: Ignition finished successfully May 17 00:36:12.259097 systemd[1]: Starting ignition-kargs.service... May 17 00:36:12.283184 ignition[841]: Ignition 2.14.0 May 17 00:36:12.283191 ignition[841]: Stage: kargs May 17 00:36:12.283330 ignition[841]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:36:12.283362 ignition[841]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:36:12.289855 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:36:12.290898 ignition[841]: kargs: kargs passed May 17 00:36:12.311005 kernel: audit: type=1130 audit(1747442172.297:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:12.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:12.290950 ignition[841]: Ignition finished successfully May 17 00:36:12.294267 systemd[1]: Finished ignition-kargs.service. May 17 00:36:12.306937 ignition[847]: Ignition 2.14.0 May 17 00:36:12.299226 systemd[1]: Starting ignition-disks.service... 
May 17 00:36:12.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:12.306947 ignition[847]: Stage: disks May 17 00:36:12.332987 kernel: audit: type=1130 audit(1747442172.318:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:12.315854 systemd[1]: Finished ignition-disks.service. May 17 00:36:12.307105 ignition[847]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:36:12.319232 systemd[1]: Reached target initrd-root-device.target. May 17 00:36:12.307132 ignition[847]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:36:12.332984 systemd[1]: Reached target local-fs-pre.target. May 17 00:36:12.312245 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:36:12.336548 systemd[1]: Reached target local-fs.target. May 17 00:36:12.314956 ignition[847]: disks: disks passed May 17 00:36:12.340690 systemd[1]: Reached target sysinit.target. May 17 00:36:12.315010 ignition[847]: Ignition finished successfully May 17 00:36:12.344657 systemd[1]: Reached target basic.target. May 17 00:36:12.349052 systemd[1]: Starting systemd-fsck-root.service... May 17 00:36:12.504534 systemd-fsck[855]: ROOT: clean, 619/7326000 files, 481079/7359488 blocks May 17 00:36:12.518385 systemd[1]: Finished systemd-fsck-root.service. May 17 00:36:12.534047 kernel: audit: type=1130 audit(1747442172.519:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:12.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:12.521388 systemd[1]: Mounting sysroot.mount... May 17 00:36:12.547786 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 17 00:36:12.548429 systemd[1]: Mounted sysroot.mount. May 17 00:36:12.551602 systemd[1]: Reached target initrd-root-fs.target. May 17 00:36:12.623863 systemd[1]: Mounting sysroot-usr.mount... May 17 00:36:12.629305 systemd[1]: Starting flatcar-metadata-hostname.service... May 17 00:36:12.633549 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:36:12.633587 systemd[1]: Reached target ignition-diskful.target. May 17 00:36:12.640504 systemd[1]: Mounted sysroot-usr.mount. May 17 00:36:12.696756 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:36:12.702981 systemd[1]: Starting initrd-setup-root.service... May 17 00:36:12.712462 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (866) May 17 00:36:12.720790 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:36:12.720845 kernel: BTRFS info (device sda6): using free space tree May 17 00:36:12.720857 kernel: BTRFS info (device sda6): has skinny extents May 17 00:36:12.725586 initrd-setup-root[871]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:36:12.733142 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 17 00:36:12.754049 initrd-setup-root[897]: cut: /sysroot/etc/group: No such file or directory May 17 00:36:12.775371 initrd-setup-root[905]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:36:12.780429 initrd-setup-root[913]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:36:13.397799 systemd[1]: Finished initrd-setup-root.service. May 17 00:36:13.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:13.402762 systemd[1]: Starting ignition-mount.service... May 17 00:36:13.416751 kernel: audit: type=1130 audit(1747442173.401:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:13.415564 systemd[1]: Starting sysroot-boot.service... May 17 00:36:13.421381 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 17 00:36:13.421507 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 17 00:36:13.441880 systemd[1]: Finished sysroot-boot.service. May 17 00:36:13.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:13.454792 kernel: audit: type=1130 audit(1747442173.443:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:13.463365 ignition[934]: INFO : Ignition 2.14.0 May 17 00:36:13.463365 ignition[934]: INFO : Stage: mount May 17 00:36:13.466862 ignition[934]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:36:13.466862 ignition[934]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:36:13.479784 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:36:13.483373 ignition[934]: INFO : mount: mount passed May 17 00:36:13.484979 ignition[934]: INFO : Ignition finished successfully May 17 00:36:13.487547 systemd[1]: Finished ignition-mount.service. May 17 00:36:13.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:13.502822 kernel: audit: type=1130 audit(1747442173.489:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:14.422242 coreos-metadata[865]: May 17 00:36:14.422 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 17 00:36:14.440253 coreos-metadata[865]: May 17 00:36:14.440 INFO Fetch successful May 17 00:36:14.475091 coreos-metadata[865]: May 17 00:36:14.474 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 17 00:36:14.488258 coreos-metadata[865]: May 17 00:36:14.488 INFO Fetch successful May 17 00:36:14.513709 coreos-metadata[865]: May 17 00:36:14.513 INFO wrote hostname ci-3510.3.7-n-51492a5456 to /sysroot/etc/hostname May 17 00:36:14.519152 systemd[1]: Finished flatcar-metadata-hostname.service. 
May 17 00:36:14.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:14.524927 systemd[1]: Starting ignition-files.service... May 17 00:36:14.541062 kernel: audit: type=1130 audit(1747442174.524:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:14.541932 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:36:14.552784 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (944) May 17 00:36:14.552824 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:36:14.564023 kernel: BTRFS info (device sda6): using free space tree May 17 00:36:14.564077 kernel: BTRFS info (device sda6): has skinny extents May 17 00:36:14.572543 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 17 00:36:14.585208 ignition[963]: INFO : Ignition 2.14.0 May 17 00:36:14.585208 ignition[963]: INFO : Stage: files May 17 00:36:14.589399 ignition[963]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:36:14.589399 ignition[963]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:36:14.598102 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:36:14.617807 ignition[963]: DEBUG : files: compiled without relabeling support, skipping May 17 00:36:14.620928 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:36:14.620928 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:36:14.660275 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:36:14.663537 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:36:14.683627 unknown[963]: wrote ssh authorized keys file for user: core May 17 00:36:14.686358 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:36:14.689740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 17 00:36:14.693477 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:36:14.697503 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:36:14.701423 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:36:14.705386 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: 
op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:36:14.710879 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:36:14.716415 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/waagent.service" May 17 00:36:14.720461 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:36:14.728593 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1939668635" May 17 00:36:14.733057 ignition[963]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1939668635": device or resource busy May 17 00:36:14.733057 ignition[963]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1939668635", trying btrfs: device or resource busy May 17 00:36:14.733057 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1939668635" May 17 00:36:14.733057 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1939668635" May 17 00:36:14.752257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting "/mnt/oem1939668635" May 17 00:36:14.752257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem1939668635" May 17 00:36:14.752257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): 
[finished] writing file "/sysroot/etc/systemd/system/waagent.service" May 17 00:36:14.752257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:36:14.752257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:36:14.752257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1756386624" May 17 00:36:14.752257 ignition[963]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1756386624": device or resource busy May 17 00:36:14.752257 ignition[963]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1756386624", trying btrfs: device or resource busy May 17 00:36:14.752257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1756386624" May 17 00:36:14.752257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1756386624" May 17 00:36:14.752257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem1756386624" May 17 00:36:14.752257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem1756386624" May 17 00:36:14.752257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:36:14.752257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:36:14.752257 
ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 17 00:36:14.738670 systemd[1]: mnt-oem1939668635.mount: Deactivated successfully. May 17 00:36:15.419156 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK May 17 00:36:15.600162 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:36:15.600162 ignition[963]: INFO : files: op(f): [started] processing unit "waagent.service" May 17 00:36:15.600162 ignition[963]: INFO : files: op(f): [finished] processing unit "waagent.service" May 17 00:36:15.600162 ignition[963]: INFO : files: op(10): [started] processing unit "nvidia.service" May 17 00:36:15.600162 ignition[963]: INFO : files: op(10): [finished] processing unit "nvidia.service" May 17 00:36:15.633027 kernel: audit: type=1130 audit(1747442175.609:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:15.633146 ignition[963]: INFO : files: op(11): [started] setting preset to enabled for "waagent.service" May 17 00:36:15.633146 ignition[963]: INFO : files: op(11): [finished] setting preset to enabled for "waagent.service" May 17 00:36:15.633146 ignition[963]: INFO : files: op(12): [started] setting preset to enabled for "nvidia.service" May 17 00:36:15.633146 ignition[963]: INFO : files: op(12): [finished] setting preset to enabled for "nvidia.service" May 17 00:36:15.633146 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:36:15.633146 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:36:15.633146 ignition[963]: INFO : files: files passed May 17 00:36:15.633146 ignition[963]: INFO : Ignition finished successfully May 17 00:36:15.606204 systemd[1]: Finished ignition-files.service. May 17 00:36:15.610749 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 17 00:36:15.666227 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:36:15.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:15.625594 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 17 00:36:15.628267 systemd[1]: Starting ignition-quench.service... May 17 00:36:15.634927 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:36:15.635031 systemd[1]: Finished ignition-quench.service. May 17 00:36:15.662316 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 17 00:36:15.666328 systemd[1]: Reached target ignition-complete.target. May 17 00:36:15.672147 systemd[1]: Starting initrd-parse-etc.service... May 17 00:36:15.691164 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:36:15.691241 systemd[1]: Finished initrd-parse-etc.service. May 17 00:36:15.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.696864 systemd[1]: Reached target initrd-fs.target. May 17 00:36:15.700291 systemd[1]: Reached target initrd.target. May 17 00:36:15.703657 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 00:36:15.707084 systemd[1]: Starting dracut-pre-pivot.service... May 17 00:36:15.717601 systemd[1]: Finished dracut-pre-pivot.service. May 17 00:36:15.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.721732 systemd[1]: Starting initrd-cleanup.service... May 17 00:36:15.731361 systemd[1]: Stopped target nss-lookup.target. 
May 17 00:36:15.734944 systemd[1]: Stopped target remote-cryptsetup.target. May 17 00:36:15.738636 systemd[1]: Stopped target timers.target. May 17 00:36:15.742283 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:36:15.744417 systemd[1]: Stopped dracut-pre-pivot.service. May 17 00:36:15.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.747843 systemd[1]: Stopped target initrd.target. May 17 00:36:15.750889 systemd[1]: Stopped target basic.target. May 17 00:36:15.754104 systemd[1]: Stopped target ignition-complete.target. May 17 00:36:15.757896 systemd[1]: Stopped target ignition-diskful.target. May 17 00:36:15.761554 systemd[1]: Stopped target initrd-root-device.target. May 17 00:36:15.765389 systemd[1]: Stopped target remote-fs.target. May 17 00:36:15.768672 systemd[1]: Stopped target remote-fs-pre.target. May 17 00:36:15.772559 systemd[1]: Stopped target sysinit.target. May 17 00:36:15.775688 systemd[1]: Stopped target local-fs.target. May 17 00:36:15.778918 systemd[1]: Stopped target local-fs-pre.target. May 17 00:36:15.782188 systemd[1]: Stopped target swap.target. May 17 00:36:15.785748 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:36:15.788034 systemd[1]: Stopped dracut-pre-mount.service. May 17 00:36:15.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.791418 systemd[1]: Stopped target cryptsetup.target. May 17 00:36:15.794921 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:36:15.796919 systemd[1]: Stopped dracut-initqueue.service. 
May 17 00:36:15.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.800369 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:36:15.803082 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 17 00:36:15.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.807457 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:36:15.809863 systemd[1]: Stopped ignition-files.service. May 17 00:36:15.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.813661 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:36:15.815897 systemd[1]: Stopped flatcar-metadata-hostname.service. May 17 00:36:15.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:15.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.820804 systemd[1]: Stopping ignition-mount.service... May 17 00:36:15.838337 iscsid[810]: iscsid shutting down. May 17 00:36:15.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.822606 systemd[1]: Stopping iscsid.service... May 17 00:36:15.824238 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:36:15.850131 ignition[1001]: INFO : Ignition 2.14.0 May 17 00:36:15.850131 ignition[1001]: INFO : Stage: umount May 17 00:36:15.850131 ignition[1001]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:36:15.850131 ignition[1001]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:36:15.824393 systemd[1]: Stopped kmod-static-nodes.service. May 17 00:36:15.865958 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:36:15.865958 ignition[1001]: INFO : umount: umount passed May 17 00:36:15.865958 ignition[1001]: INFO : Ignition finished successfully May 17 00:36:15.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:15.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.827544 systemd[1]: Stopping sysroot-boot.service... May 17 00:36:15.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.829379 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:36:15.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.829556 systemd[1]: Stopped systemd-udev-trigger.service. May 17 00:36:15.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.831794 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:36:15.831947 systemd[1]: Stopped dracut-pre-trigger.service. May 17 00:36:15.838173 systemd[1]: iscsid.service: Deactivated successfully. May 17 00:36:15.838280 systemd[1]: Stopped iscsid.service. May 17 00:36:15.841683 systemd[1]: Stopping iscsiuio.service... May 17 00:36:15.862794 systemd[1]: iscsiuio.service: Deactivated successfully. 
May 17 00:36:15.862900 systemd[1]: Stopped iscsiuio.service. May 17 00:36:15.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.866229 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:36:15.866315 systemd[1]: Finished initrd-cleanup.service. May 17 00:36:15.872394 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:36:15.872859 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:36:15.872940 systemd[1]: Stopped ignition-mount.service. May 17 00:36:15.876724 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:36:15.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.876786 systemd[1]: Stopped ignition-disks.service. May 17 00:36:15.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.879277 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:36:15.879328 systemd[1]: Stopped ignition-kargs.service. May 17 00:36:15.937000 audit: BPF prog-id=6 op=UNLOAD May 17 00:36:15.881295 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:36:15.881336 systemd[1]: Stopped ignition-fetch.service. May 17 00:36:15.885172 systemd[1]: Stopped target network.target. May 17 00:36:15.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.886759 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 17 00:36:15.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.886829 systemd[1]: Stopped ignition-fetch-offline.service. May 17 00:36:15.890387 systemd[1]: Stopped target paths.target. May 17 00:36:15.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:15.891981 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:36:15.896833 systemd[1]: Stopped systemd-ask-password-console.path. May 17 00:36:15.900000 systemd[1]: Stopped target slices.target. May 17 00:36:15.901468 systemd[1]: Stopped target sockets.target. May 17 00:36:15.904590 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:36:15.904642 systemd[1]: Closed iscsid.socket. May 17 00:36:15.907578 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:36:15.907620 systemd[1]: Closed iscsiuio.socket. May 17 00:36:15.910943 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:36:15.910999 systemd[1]: Stopped ignition-setup.service. May 17 00:36:15.914614 systemd[1]: Stopping systemd-networkd.service... May 17 00:36:15.918070 systemd[1]: Stopping systemd-resolved.service... May 17 00:36:15.923817 systemd-networkd[802]: eth0: DHCPv6 lease lost May 17 00:36:15.982000 audit: BPF prog-id=9 op=UNLOAD May 17 00:36:15.926241 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:36:15.926330 systemd[1]: Stopped systemd-networkd.service. May 17 00:36:15.932242 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:36:15.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:36:15.932335 systemd[1]: Stopped systemd-resolved.service. May 17 00:36:15.938001 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:36:15.938037 systemd[1]: Closed systemd-networkd.socket. May 17 00:36:15.941643 systemd[1]: Stopping network-cleanup.service... May 17 00:36:15.944842 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:36:15.944901 systemd[1]: Stopped parse-ip-for-networkd.service. May 17 00:36:15.948324 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:36:15.949782 systemd[1]: Stopped systemd-sysctl.service. May 17 00:36:15.953822 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:36:15.955772 systemd[1]: Stopped systemd-modules-load.service. May 17 00:36:15.961838 systemd[1]: Stopping systemd-udevd.service... May 17 00:36:15.972491 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 00:36:15.986410 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:36:15.986567 systemd[1]: Stopped systemd-udevd.service. May 17 00:36:16.011955 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:36:16.012013 systemd[1]: Closed systemd-udevd-control.socket. May 17 00:36:16.023475 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:36:16.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:16.023525 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 00:36:16.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:16.025747 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
May 17 00:36:16.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:16.025805 systemd[1]: Stopped dracut-pre-udev.service. May 17 00:36:16.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:16.029560 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:36:16.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:16.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:16.029613 systemd[1]: Stopped dracut-cmdline.service. May 17 00:36:16.032943 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:36:16.033012 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:36:16.036148 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:36:16.039855 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:36:16.039917 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 00:36:16.044822 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:36:16.067492 kernel: hv_netvsc 7c1e5202-de5b-7c1e-5202-de5b7c1e5202 eth0: Data path switched from VF: enP65211s1 May 17 00:36:16.044917 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 00:36:16.084173 systemd[1]: network-cleanup.service: Deactivated successfully. 
May 17 00:36:16.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:16.084315 systemd[1]: Stopped network-cleanup.service. May 17 00:36:16.217795 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:36:16.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:16.217929 systemd[1]: Stopped sysroot-boot.service. May 17 00:36:16.222063 systemd[1]: Reached target initrd-switch-root.target. May 17 00:36:16.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:16.225476 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:36:16.225543 systemd[1]: Stopped initrd-setup-root.service. May 17 00:36:16.230492 systemd[1]: Starting initrd-switch-root.service... May 17 00:36:16.243591 systemd[1]: Switching root. May 17 00:36:16.268757 systemd-journald[183]: Journal stopped May 17 00:36:31.353364 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). May 17 00:36:31.353400 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:36:31.353414 kernel: SELinux: Class anon_inode not defined in policy. 
May 17 00:36:31.353426 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:36:31.353435 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:36:31.353446 kernel: SELinux: policy capability open_perms=1 May 17 00:36:31.353462 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:36:31.353475 kernel: SELinux: policy capability always_check_network=0 May 17 00:36:31.353488 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:36:31.353518 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:36:31.353531 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:36:31.353544 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:36:31.353557 kernel: kauditd_printk_skb: 42 callbacks suppressed May 17 00:36:31.353571 kernel: audit: type=1403 audit(1747442178.800:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:36:31.353592 systemd[1]: Successfully loaded SELinux policy in 300.761ms. May 17 00:36:31.353610 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 71.772ms. May 17 00:36:31.353628 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:36:31.353645 systemd[1]: Detected virtualization microsoft. May 17 00:36:31.353662 systemd[1]: Detected architecture x86-64. May 17 00:36:31.353678 systemd[1]: Detected first boot. May 17 00:36:31.353695 systemd[1]: Hostname set to . May 17 00:36:31.353712 systemd[1]: Initializing machine ID from random generator. 
May 17 00:36:31.353730 kernel: audit: type=1400 audit(1747442179.799:82): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:36:31.353748 kernel: audit: type=1400 audit(1747442179.817:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:36:31.353791 kernel: audit: type=1400 audit(1747442179.817:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:36:31.353811 kernel: audit: type=1334 audit(1747442179.828:85): prog-id=10 op=LOAD May 17 00:36:31.353822 kernel: audit: type=1334 audit(1747442179.828:86): prog-id=10 op=UNLOAD May 17 00:36:31.353833 kernel: audit: type=1334 audit(1747442179.839:87): prog-id=11 op=LOAD May 17 00:36:31.353845 kernel: audit: type=1334 audit(1747442179.839:88): prog-id=11 op=UNLOAD May 17 00:36:31.353855 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
May 17 00:36:31.353866 kernel: audit: type=1400 audit(1747442181.448:89): avc: denied { associate } for pid=1034 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:36:31.353878 kernel: audit: type=1300 audit(1747442181.448:89): arch=c000003e syscall=188 success=yes exit=0 a0=c00014d892 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1017 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:31.353893 systemd[1]: Populated /etc with preset unit settings. May 17 00:36:31.353903 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:36:31.353916 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:36:31.353929 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 17 00:36:31.353939 kernel: kauditd_printk_skb: 7 callbacks suppressed May 17 00:36:31.353950 kernel: audit: type=1334 audit(1747442190.831:91): prog-id=12 op=LOAD May 17 00:36:31.353961 kernel: audit: type=1334 audit(1747442190.831:92): prog-id=3 op=UNLOAD May 17 00:36:31.353972 kernel: audit: type=1334 audit(1747442190.836:93): prog-id=13 op=LOAD May 17 00:36:31.353987 kernel: audit: type=1334 audit(1747442190.840:94): prog-id=14 op=LOAD May 17 00:36:31.353999 kernel: audit: type=1334 audit(1747442190.840:95): prog-id=4 op=UNLOAD May 17 00:36:31.354009 kernel: audit: type=1334 audit(1747442190.840:96): prog-id=5 op=UNLOAD May 17 00:36:31.354020 kernel: audit: type=1334 audit(1747442190.844:97): prog-id=15 op=LOAD May 17 00:36:31.354032 kernel: audit: type=1334 audit(1747442190.844:98): prog-id=12 op=UNLOAD May 17 00:36:31.354040 kernel: audit: type=1334 audit(1747442190.848:99): prog-id=16 op=LOAD May 17 00:36:31.354052 kernel: audit: type=1334 audit(1747442190.853:100): prog-id=17 op=LOAD May 17 00:36:31.354064 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:36:31.354076 systemd[1]: Stopped initrd-switch-root.service. May 17 00:36:31.354088 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:36:31.354101 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:36:31.354112 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:36:31.354123 systemd[1]: Created slice system-getty.slice. May 17 00:36:31.354136 systemd[1]: Created slice system-modprobe.slice. May 17 00:36:31.354146 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:36:31.354158 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:36:31.354173 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:36:31.354185 systemd[1]: Created slice user.slice. May 17 00:36:31.354196 systemd[1]: Started systemd-ask-password-console.path. 
May 17 00:36:31.354208 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:36:31.354219 systemd[1]: Set up automount boot.automount. May 17 00:36:31.354230 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:36:31.354242 systemd[1]: Stopped target initrd-switch-root.target. May 17 00:36:31.354253 systemd[1]: Stopped target initrd-fs.target. May 17 00:36:31.354264 systemd[1]: Stopped target initrd-root-fs.target. May 17 00:36:31.354278 systemd[1]: Reached target integritysetup.target. May 17 00:36:31.354290 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:36:31.354301 systemd[1]: Reached target remote-fs.target. May 17 00:36:31.354313 systemd[1]: Reached target slices.target. May 17 00:36:31.354326 systemd[1]: Reached target swap.target. May 17 00:36:31.354338 systemd[1]: Reached target torcx.target. May 17 00:36:31.354349 systemd[1]: Reached target veritysetup.target. May 17 00:36:31.354363 systemd[1]: Listening on systemd-coredump.socket. May 17 00:36:31.354376 systemd[1]: Listening on systemd-initctl.socket. May 17 00:36:31.354386 systemd[1]: Listening on systemd-networkd.socket. May 17 00:36:31.354399 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:36:31.354412 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:36:31.354426 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:36:31.354438 systemd[1]: Mounting dev-hugepages.mount... May 17 00:36:31.354450 systemd[1]: Mounting dev-mqueue.mount... May 17 00:36:31.354461 systemd[1]: Mounting media.mount... May 17 00:36:31.354474 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:36:31.354485 systemd[1]: Mounting sys-kernel-debug.mount... May 17 00:36:31.354497 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 00:36:31.354510 systemd[1]: Mounting tmp.mount... May 17 00:36:31.354521 systemd[1]: Starting flatcar-tmpfiles.service... 
May 17 00:36:31.354535 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:36:31.354549 systemd[1]: Starting kmod-static-nodes.service... May 17 00:36:31.354558 systemd[1]: Starting modprobe@configfs.service... May 17 00:36:31.354571 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:36:31.354584 systemd[1]: Starting modprobe@drm.service... May 17 00:36:31.354594 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:36:31.354606 systemd[1]: Starting modprobe@fuse.service... May 17 00:36:31.354619 systemd[1]: Starting modprobe@loop.service... May 17 00:36:31.354630 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:36:31.354644 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:36:31.354657 systemd[1]: Stopped systemd-fsck-root.service. May 17 00:36:31.354668 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:36:31.354680 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:36:31.354692 systemd[1]: Stopped systemd-journald.service. May 17 00:36:31.354705 systemd[1]: Starting systemd-journald.service... May 17 00:36:31.354717 systemd[1]: Starting systemd-modules-load.service... May 17 00:36:31.354728 kernel: loop: module loaded May 17 00:36:31.354739 systemd[1]: Starting systemd-network-generator.service... May 17 00:36:31.354754 systemd[1]: Starting systemd-remount-fs.service... May 17 00:36:31.355783 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:36:31.355803 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:36:31.355817 systemd[1]: Stopped verity-setup.service. May 17 00:36:31.355827 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:36:31.355841 systemd[1]: Mounted dev-hugepages.mount. May 17 00:36:31.355854 systemd[1]: Mounted dev-mqueue.mount. 
May 17 00:36:31.355864 systemd[1]: Mounted media.mount. May 17 00:36:31.355880 systemd[1]: Mounted sys-kernel-debug.mount. May 17 00:36:31.355891 kernel: fuse: init (API version 7.34) May 17 00:36:31.355903 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 00:36:31.355914 systemd[1]: Mounted tmp.mount. May 17 00:36:31.355926 systemd[1]: Finished flatcar-tmpfiles.service. May 17 00:36:31.355936 systemd[1]: Finished kmod-static-nodes.service. May 17 00:36:31.355953 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:36:31.355968 systemd[1]: Finished modprobe@configfs.service. May 17 00:36:31.355979 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:36:31.355992 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:36:31.356005 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:36:31.356019 systemd[1]: Finished modprobe@drm.service. May 17 00:36:31.356031 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:36:31.356041 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:36:31.356051 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:36:31.356064 systemd[1]: Finished modprobe@fuse.service. May 17 00:36:31.356075 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:36:31.356091 systemd-journald[1127]: Journal started May 17 00:36:31.356143 systemd-journald[1127]: Runtime Journal (/run/log/journal/1ae0ccc6f7824e0e985f482ad0047475) is 8.0M, max 159.0M, 151.0M free. 
May 17 00:36:18.800000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:36:19.799000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:36:19.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:36:19.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:36:19.828000 audit: BPF prog-id=10 op=LOAD May 17 00:36:19.828000 audit: BPF prog-id=10 op=UNLOAD May 17 00:36:19.839000 audit: BPF prog-id=11 op=LOAD May 17 00:36:19.839000 audit: BPF prog-id=11 op=UNLOAD May 17 00:36:21.448000 audit[1034]: AVC avc: denied { associate } for pid=1034 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:36:21.448000 audit[1034]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d892 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1017 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:21.448000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:36:21.455000 audit[1034]: AVC 
avc: denied { associate } for pid=1034 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 17 00:36:21.455000 audit[1034]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d969 a2=1ed a3=0 items=2 ppid=1017 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:21.455000 audit: CWD cwd="/" May 17 00:36:21.455000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:21.455000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:21.455000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:36:30.831000 audit: BPF prog-id=12 op=LOAD May 17 00:36:30.831000 audit: BPF prog-id=3 op=UNLOAD May 17 00:36:30.836000 audit: BPF prog-id=13 op=LOAD May 17 00:36:30.840000 audit: BPF prog-id=14 op=LOAD May 17 00:36:30.840000 audit: BPF prog-id=4 op=UNLOAD May 17 00:36:30.840000 audit: BPF prog-id=5 op=UNLOAD May 17 00:36:30.844000 audit: BPF prog-id=15 op=LOAD May 17 00:36:30.844000 audit: BPF prog-id=12 op=UNLOAD May 17 00:36:30.848000 audit: BPF prog-id=16 op=LOAD May 17 00:36:30.853000 audit: BPF prog-id=17 op=LOAD May 17 00:36:30.853000 audit: BPF prog-id=13 op=UNLOAD May 17 00:36:30.853000 audit: BPF prog-id=14 op=UNLOAD May 17 
00:36:30.857000 audit: BPF prog-id=18 op=LOAD May 17 00:36:30.857000 audit: BPF prog-id=15 op=UNLOAD May 17 00:36:30.878000 audit: BPF prog-id=19 op=LOAD May 17 00:36:30.878000 audit: BPF prog-id=20 op=LOAD May 17 00:36:30.878000 audit: BPF prog-id=16 op=UNLOAD May 17 00:36:30.878000 audit: BPF prog-id=17 op=UNLOAD May 17 00:36:30.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:30.889000 audit: BPF prog-id=18 op=UNLOAD May 17 00:36:30.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:30.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:31.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.191000 audit: BPF prog-id=21 op=LOAD May 17 00:36:31.192000 audit: BPF prog-id=22 op=LOAD May 17 00:36:31.192000 audit: BPF prog-id=23 op=LOAD May 17 00:36:31.192000 audit: BPF prog-id=19 op=UNLOAD May 17 00:36:31.192000 audit: BPF prog-id=20 op=UNLOAD May 17 00:36:31.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.361809 systemd[1]: Finished modprobe@loop.service. May 17 00:36:31.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:36:31.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:31.344000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:36:31.344000 audit[1127]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff6f3f71f0 a2=4000 a3=7fff6f3f728c items=0 ppid=1 pid=1127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:31.344000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 00:36:31.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:21.378353 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:36:30.830679 systemd[1]: Queued start job for default target multi-user.target. May 17 00:36:21.394523 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:36:30.830691 systemd[1]: Unnecessary job was removed for dev-sda6.device. 
May 17 00:36:21.394549 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 00:36:30.879254 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:36:21.394591 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:21Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 17 00:36:21.394603 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:21Z" level=debug msg="skipped missing lower profile" missing profile=oem May 17 00:36:21.394666 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:21Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 17 00:36:21.394683 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:21Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 17 00:36:21.394950 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:21Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 17 00:36:21.395007 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:36:21.395025 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 00:36:21.425706 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 17 00:36:21.425759 /usr/lib/systemd/system-generators/torcx-generator[1034]: 
time="2025-05-17T00:36:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 17 00:36:31.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:21.425802 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 17 00:36:21.425827 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 17 00:36:21.425846 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 17 00:36:21.425859 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 17 00:36:29.604559 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:29Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:36:29.604847 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:29Z" level=debug msg="binaries propagated" 
assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:36:29.604961 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:29Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:36:29.605126 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:29Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:36:29.605175 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:29Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 17 00:36:29.605228 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-17T00:36:29Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 17 00:36:31.368119 systemd[1]: Started systemd-journald.service. May 17 00:36:31.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.370442 systemd[1]: Finished systemd-modules-load.service. 
May 17 00:36:31.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.373132 systemd[1]: Finished systemd-network-generator.service. May 17 00:36:31.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.375862 systemd[1]: Finished systemd-remount-fs.service. May 17 00:36:31.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.378893 systemd[1]: Reached target network-pre.target. May 17 00:36:31.382630 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 00:36:31.386033 systemd[1]: Mounting sys-kernel-config.mount... May 17 00:36:31.388513 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:36:31.420221 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:36:31.423725 systemd[1]: Starting systemd-journal-flush.service... May 17 00:36:31.425784 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:36:31.426895 systemd[1]: Starting systemd-random-seed.service... May 17 00:36:31.428977 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:36:31.430144 systemd[1]: Starting systemd-sysctl.service... May 17 00:36:31.433114 systemd[1]: Starting systemd-sysusers.service... May 17 00:36:31.438016 systemd[1]: Finished systemd-udev-trigger.service. 
May 17 00:36:31.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.440393 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:36:31.442472 systemd[1]: Mounted sys-kernel-config.mount. May 17 00:36:31.445599 systemd[1]: Starting systemd-udev-settle.service... May 17 00:36:31.488034 systemd[1]: Finished systemd-random-seed.service. May 17 00:36:31.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.492632 systemd-journald[1127]: Time spent on flushing to /var/log/journal/1ae0ccc6f7824e0e985f482ad0047475 is 22.335ms for 1143 entries. May 17 00:36:31.492632 systemd-journald[1127]: System Journal (/var/log/journal/1ae0ccc6f7824e0e985f482ad0047475) is 8.0M, max 2.6G, 2.6G free. May 17 00:36:31.578143 systemd-journald[1127]: Received client request to flush runtime journal. May 17 00:36:31.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:31.491062 systemd[1]: Finished systemd-sysctl.service. May 17 00:36:31.578400 udevadm[1158]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:36:31.496633 systemd[1]: Reached target first-boot-complete.target. May 17 00:36:31.579333 systemd[1]: Finished systemd-journal-flush.service. 
May 17 00:36:31.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:32.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:32.251337 systemd[1]: Finished systemd-sysusers.service. May 17 00:36:32.985905 systemd[1]: Finished systemd-hwdb-update.service. May 17 00:36:32.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:32.988000 audit: BPF prog-id=24 op=LOAD May 17 00:36:32.988000 audit: BPF prog-id=25 op=LOAD May 17 00:36:32.988000 audit: BPF prog-id=7 op=UNLOAD May 17 00:36:32.988000 audit: BPF prog-id=8 op=UNLOAD May 17 00:36:32.990010 systemd[1]: Starting systemd-udevd.service... May 17 00:36:33.007304 systemd-udevd[1161]: Using default interface naming scheme 'v252'. May 17 00:36:33.236665 systemd[1]: Started systemd-udevd.service. May 17 00:36:33.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:33.239000 audit: BPF prog-id=26 op=LOAD May 17 00:36:33.241648 systemd[1]: Starting systemd-networkd.service... May 17 00:36:33.274645 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 17 00:36:33.336000 audit: BPF prog-id=27 op=LOAD May 17 00:36:33.336000 audit: BPF prog-id=28 op=LOAD May 17 00:36:33.336000 audit: BPF prog-id=29 op=LOAD May 17 00:36:33.337940 systemd[1]: Starting systemd-userdbd.service... 
May 17 00:36:33.353791 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:36:33.402449 kernel: hv_utils: Registering HyperV Utility Driver May 17 00:36:33.402540 kernel: hv_vmbus: registering driver hv_utils May 17 00:36:33.345000 audit[1177]: AVC avc: denied { confidentiality } for pid=1177 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:36:33.411783 kernel: hv_vmbus: registering driver hv_balloon May 17 00:36:33.411859 kernel: hv_vmbus: registering driver hyperv_fb May 17 00:36:33.433496 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 17 00:36:33.433577 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 17 00:36:33.433614 kernel: hv_utils: Heartbeat IC version 3.0 May 17 00:36:33.433634 kernel: Console: switching to colour dummy device 80x25 May 17 00:36:33.433651 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 17 00:36:33.433667 kernel: hv_utils: Shutdown IC version 3.2 May 17 00:36:33.433684 kernel: hv_utils: TimeSync IC version 4.0 May 17 00:36:34.221066 kernel: Console: switching to colour frame buffer device 128x48 May 17 00:36:33.345000 audit[1177]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5606ec2a5620 a1=f884 a2=7fa1b5c7cbc5 a3=5 items=12 ppid=1161 pid=1177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:33.345000 audit: CWD cwd="/" May 17 00:36:33.345000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:33.345000 audit: PATH item=1 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:33.345000 audit: PATH item=2 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:33.345000 audit: PATH item=3 name=(null) inode=14999 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:33.345000 audit: PATH item=4 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:33.345000 audit: PATH item=5 name=(null) inode=15000 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:33.345000 audit: PATH item=6 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:33.345000 audit: PATH item=7 name=(null) inode=15001 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:33.345000 audit: PATH item=8 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:33.345000 audit: PATH item=9 name=(null) inode=15002 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:33.345000 audit: PATH item=10 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:33.345000 audit: PATH item=11 name=(null) inode=15003 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:33.345000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:36:34.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:34.268534 systemd[1]: Started systemd-userdbd.service. May 17 00:36:34.498173 kernel: KVM: vmx: using Hyper-V Enlightened VMCS May 17 00:36:34.527154 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:36:34.586919 systemd-networkd[1167]: lo: Link UP May 17 00:36:34.586931 systemd-networkd[1167]: lo: Gained carrier May 17 00:36:34.587522 systemd-networkd[1167]: Enumeration completed May 17 00:36:34.587637 systemd[1]: Started systemd-networkd.service. May 17 00:36:34.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:34.591303 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:36:34.626173 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:36:34.633521 systemd[1]: Finished systemd-udev-settle.service. May 17 00:36:34.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:34.637096 systemd[1]: Starting lvm2-activation-early.service... 
May 17 00:36:34.681172 kernel: mlx5_core febb:00:02.0 enP65211s1: Link up May 17 00:36:34.701169 kernel: hv_netvsc 7c1e5202-de5b-7c1e-5202-de5b7c1e5202 eth0: Data path switched to VF: enP65211s1 May 17 00:36:34.701664 systemd-networkd[1167]: enP65211s1: Link UP May 17 00:36:34.701916 systemd-networkd[1167]: eth0: Link UP May 17 00:36:34.701989 systemd-networkd[1167]: eth0: Gained carrier May 17 00:36:34.707548 systemd-networkd[1167]: enP65211s1: Gained carrier May 17 00:36:34.739306 systemd-networkd[1167]: eth0: DHCPv4 address 10.200.4.30/24, gateway 10.200.4.1 acquired from 168.63.129.16 May 17 00:36:35.092168 lvm[1240]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:36:35.123398 systemd[1]: Finished lvm2-activation-early.service. May 17 00:36:35.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:35.126180 systemd[1]: Reached target cryptsetup.target. May 17 00:36:35.129838 systemd[1]: Starting lvm2-activation.service... May 17 00:36:35.134415 lvm[1241]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:36:35.154094 systemd[1]: Finished lvm2-activation.service. May 17 00:36:35.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:35.156502 systemd[1]: Reached target local-fs-pre.target. May 17 00:36:35.158476 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:36:35.158512 systemd[1]: Reached target local-fs.target. May 17 00:36:35.160515 systemd[1]: Reached target machines.target. May 17 00:36:35.164022 systemd[1]: Starting ldconfig.service... 
May 17 00:36:35.183029 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:36:35.183100 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:36:35.184275 systemd[1]: Starting systemd-boot-update.service... May 17 00:36:35.187704 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:36:35.191050 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:36:35.194374 systemd[1]: Starting systemd-sysext.service... May 17 00:36:35.260585 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:36:35.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:35.302114 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1243 (bootctl) May 17 00:36:35.303730 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:36:35.320935 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:36:35.511577 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:36:35.511851 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:36:35.837170 kernel: loop0: detected capacity change from 0 to 229808 May 17 00:36:35.939748 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:36:35.940409 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:36:35.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:35.960161 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:36:35.977165 kernel: loop1: detected capacity change from 0 to 229808 May 17 00:36:35.983722 (sd-sysext)[1255]: Using extensions 'kubernetes'. May 17 00:36:35.984167 (sd-sysext)[1255]: Merged extensions into '/usr'. May 17 00:36:35.992964 systemd-networkd[1167]: eth0: Gained IPv6LL May 17 00:36:36.001029 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:36:36.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.003900 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:36:36.005456 systemd[1]: Mounting usr-share-oem.mount... May 17 00:36:36.007936 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:36:36.011683 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:36:36.014915 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:36:36.018167 systemd[1]: Starting modprobe@loop.service... May 17 00:36:36.020076 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:36:36.020315 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:36:36.020490 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:36:36.023234 systemd[1]: Mounted usr-share-oem.mount. May 17 00:36:36.025902 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:36:36.026060 systemd[1]: Finished modprobe@dm_mod.service. 
May 17 00:36:36.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.028806 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:36:36.028956 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:36:36.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.031721 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:36:36.031868 systemd[1]: Finished modprobe@loop.service. May 17 00:36:36.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.034520 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 17 00:36:36.034673 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:36:36.035877 systemd[1]: Finished systemd-sysext.service. May 17 00:36:36.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.039338 systemd[1]: Starting ensure-sysext.service... May 17 00:36:36.042680 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:36:36.049822 systemd[1]: Reloading. May 17 00:36:36.133640 /usr/lib/systemd/system-generators/torcx-generator[1282]: time="2025-05-17T00:36:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:36:36.137712 /usr/lib/systemd/system-generators/torcx-generator[1282]: time="2025-05-17T00:36:36Z" level=info msg="torcx already run" May 17 00:36:36.209440 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:36:36.216123 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:36:36.216153 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:36:36.232595 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 17 00:36:36.297000 audit: BPF prog-id=30 op=LOAD May 17 00:36:36.297000 audit: BPF prog-id=26 op=UNLOAD May 17 00:36:36.298000 audit: BPF prog-id=31 op=LOAD May 17 00:36:36.298000 audit: BPF prog-id=27 op=UNLOAD May 17 00:36:36.298000 audit: BPF prog-id=32 op=LOAD May 17 00:36:36.298000 audit: BPF prog-id=33 op=LOAD May 17 00:36:36.298000 audit: BPF prog-id=28 op=UNLOAD May 17 00:36:36.298000 audit: BPF prog-id=29 op=UNLOAD May 17 00:36:36.299000 audit: BPF prog-id=34 op=LOAD May 17 00:36:36.299000 audit: BPF prog-id=35 op=LOAD May 17 00:36:36.299000 audit: BPF prog-id=24 op=UNLOAD May 17 00:36:36.299000 audit: BPF prog-id=25 op=UNLOAD May 17 00:36:36.300000 audit: BPF prog-id=36 op=LOAD May 17 00:36:36.300000 audit: BPF prog-id=21 op=UNLOAD May 17 00:36:36.300000 audit: BPF prog-id=37 op=LOAD May 17 00:36:36.300000 audit: BPF prog-id=38 op=LOAD May 17 00:36:36.300000 audit: BPF prog-id=22 op=UNLOAD May 17 00:36:36.300000 audit: BPF prog-id=23 op=UNLOAD May 17 00:36:36.315313 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:36:36.315628 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:36:36.317038 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:36:36.319583 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:36:36.322591 systemd[1]: Starting modprobe@loop.service... May 17 00:36:36.323495 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:36:36.323704 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:36:36.323912 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 17 00:36:36.325528 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:36:36.325735 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:36:36.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.329318 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:36:36.329478 systemd[1]: Finished modprobe@loop.service. May 17 00:36:36.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.334622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:36:36.334769 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:36:36.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:36.336131 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:36:36.336422 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:36:36.337933 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:36:36.340706 systemd[1]: Starting modprobe@drm.service... May 17 00:36:36.342329 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:36:36.343753 systemd[1]: Starting modprobe@loop.service... May 17 00:36:36.344824 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:36:36.345025 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:36:36.345231 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:36:36.345456 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:36:36.347307 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:36:36.347467 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:36:36.349095 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:36:36.349218 systemd[1]: Finished modprobe@loop.service. May 17 00:36:36.350691 systemd[1]: Finished ensure-sysext.service. May 17 00:36:36.351996 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:36:36.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:36.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.354297 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:36:36.354438 systemd[1]: Finished modprobe@drm.service. May 17 00:36:36.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.548692 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:36:36.879304 systemd-fsck[1252]: fsck.fat 4.2 (2021-01-31) May 17 00:36:36.879304 systemd-fsck[1252]: /dev/sda1: 790 files, 120726/258078 clusters May 17 00:36:36.881564 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
May 17 00:36:36.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.886113 systemd[1]: Mounting boot.mount... May 17 00:36:36.890740 kernel: kauditd_printk_skb: 120 callbacks suppressed May 17 00:36:36.890825 kernel: audit: type=1130 audit(1747442196.883:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.917639 systemd[1]: Mounted boot.mount. May 17 00:36:36.933063 systemd[1]: Finished systemd-boot-update.service. May 17 00:36:36.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.953181 kernel: audit: type=1130 audit(1747442196.934:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:37.337072 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:36:37.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:37.341086 systemd[1]: Starting audit-rules.service... May 17 00:36:37.351656 kernel: audit: type=1130 audit(1747442197.339:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:37.352950 systemd[1]: Starting clean-ca-certificates.service... May 17 00:36:37.356434 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:36:37.359000 audit: BPF prog-id=39 op=LOAD May 17 00:36:37.361587 systemd[1]: Starting systemd-resolved.service... May 17 00:36:37.365572 kernel: audit: type=1334 audit(1747442197.359:207): prog-id=39 op=LOAD May 17 00:36:37.365000 audit: BPF prog-id=40 op=LOAD May 17 00:36:37.368182 systemd[1]: Starting systemd-timesyncd.service... May 17 00:36:37.372702 kernel: audit: type=1334 audit(1747442197.365:208): prog-id=40 op=LOAD May 17 00:36:37.374171 systemd[1]: Starting systemd-update-utmp.service... May 17 00:36:37.385000 audit[1362]: SYSTEM_BOOT pid=1362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:36:37.388401 systemd[1]: Finished systemd-update-utmp.service. May 17 00:36:37.400035 kernel: audit: type=1127 audit(1747442197.385:209): pid=1362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:36:37.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:37.410333 kernel: audit: type=1130 audit(1747442197.399:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:37.425528 systemd[1]: Finished clean-ca-certificates.service. 
May 17 00:36:37.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:37.441315 kernel: audit: type=1130 audit(1747442197.427:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:37.427817 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:36:37.536630 systemd[1]: Started systemd-timesyncd.service. May 17 00:36:37.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:37.538845 systemd[1]: Reached target time-set.target. May 17 00:36:37.552078 kernel: audit: type=1130 audit(1747442197.538:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:37.581354 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:36:37.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:37.585237 systemd-resolved[1360]: Positive Trust Anchors: May 17 00:36:37.585553 systemd-resolved[1360]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:36:37.585645 systemd-resolved[1360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:36:37.597328 kernel: audit: type=1130 audit(1747442197.582:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:37.691137 systemd-timesyncd[1361]: Contacted time server 77.104.162.218:123 (0.flatcar.pool.ntp.org). May 17 00:36:37.691329 systemd-timesyncd[1361]: Initial clock synchronization to Sat 2025-05-17 00:36:37.692567 UTC. May 17 00:36:37.701000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:36:37.701000 audit[1377]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdaf5b3600 a2=420 a3=0 items=0 ppid=1356 pid=1377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:37.701000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:36:37.702652 augenrules[1377]: No rules May 17 00:36:37.703214 systemd[1]: Finished audit-rules.service. May 17 00:36:37.734773 systemd-resolved[1360]: Using system hostname 'ci-3510.3.7-n-51492a5456'. May 17 00:36:37.736481 systemd[1]: Started systemd-resolved.service. 
May 17 00:36:37.738675 systemd[1]: Reached target network.target. May 17 00:36:37.740343 systemd[1]: Reached target network-online.target. May 17 00:36:37.742298 systemd[1]: Reached target nss-lookup.target. May 17 00:36:43.927892 ldconfig[1242]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:36:43.937822 systemd[1]: Finished ldconfig.service. May 17 00:36:43.941798 systemd[1]: Starting systemd-update-done.service... May 17 00:36:43.948469 systemd[1]: Finished systemd-update-done.service. May 17 00:36:43.950708 systemd[1]: Reached target sysinit.target. May 17 00:36:43.952506 systemd[1]: Started motdgen.path. May 17 00:36:43.954123 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:36:43.956669 systemd[1]: Started logrotate.timer. May 17 00:36:43.958258 systemd[1]: Started mdadm.timer. May 17 00:36:43.959673 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:36:43.961751 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:36:43.961796 systemd[1]: Reached target paths.target. May 17 00:36:43.963497 systemd[1]: Reached target timers.target. May 17 00:36:43.965732 systemd[1]: Listening on dbus.socket. May 17 00:36:43.968298 systemd[1]: Starting docker.socket... May 17 00:36:44.011910 systemd[1]: Listening on sshd.socket. May 17 00:36:44.014385 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:36:44.014927 systemd[1]: Listening on docker.socket. May 17 00:36:44.016695 systemd[1]: Reached target sockets.target. May 17 00:36:44.018356 systemd[1]: Reached target basic.target. May 17 00:36:44.019998 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
May 17 00:36:44.020041 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:36:44.021039 systemd[1]: Starting containerd.service... May 17 00:36:44.024053 systemd[1]: Starting dbus.service... May 17 00:36:44.026748 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:36:44.029587 systemd[1]: Starting extend-filesystems.service... May 17 00:36:44.031441 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:36:44.033044 systemd[1]: Starting kubelet.service... May 17 00:36:44.037274 systemd[1]: Starting motdgen.service... May 17 00:36:44.040136 systemd[1]: Started nvidia.service. May 17 00:36:44.043101 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:36:44.050181 systemd[1]: Starting sshd-keygen.service... May 17 00:36:44.055196 systemd[1]: Starting systemd-logind.service... May 17 00:36:44.059781 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:36:44.059893 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:36:44.060506 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:36:44.061372 systemd[1]: Starting update-engine.service... May 17 00:36:44.066221 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:36:44.072549 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:36:44.072781 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:36:44.116216 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:36:44.116445 systemd[1]: Finished motdgen.service. 
May 17 00:36:44.124857 jq[1387]: false May 17 00:36:44.125173 jq[1404]: true May 17 00:36:44.126660 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:36:44.126912 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:36:44.148882 jq[1413]: true May 17 00:36:44.170130 extend-filesystems[1388]: Found loop1 May 17 00:36:44.170130 extend-filesystems[1388]: Found sda May 17 00:36:44.170130 extend-filesystems[1388]: Found sda1 May 17 00:36:44.170130 extend-filesystems[1388]: Found sda2 May 17 00:36:44.170130 extend-filesystems[1388]: Found sda3 May 17 00:36:44.170130 extend-filesystems[1388]: Found usr May 17 00:36:44.170130 extend-filesystems[1388]: Found sda4 May 17 00:36:44.187807 extend-filesystems[1388]: Found sda6 May 17 00:36:44.187807 extend-filesystems[1388]: Found sda7 May 17 00:36:44.187807 extend-filesystems[1388]: Found sda9 May 17 00:36:44.187807 extend-filesystems[1388]: Checking size of /dev/sda9 May 17 00:36:44.222703 env[1411]: time="2025-05-17T00:36:44.215566077Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:36:44.200093 systemd-logind[1399]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:36:44.206280 systemd-logind[1399]: New seat seat0. May 17 00:36:44.269392 env[1411]: time="2025-05-17T00:36:44.269349341Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:36:44.269674 env[1411]: time="2025-05-17T00:36:44.269654361Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:36:44.271310 env[1411]: time="2025-05-17T00:36:44.271273268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:36:44.271426 env[1411]: time="2025-05-17T00:36:44.271410078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:36:44.271744 env[1411]: time="2025-05-17T00:36:44.271721298Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:36:44.271821 env[1411]: time="2025-05-17T00:36:44.271807704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:36:44.271895 env[1411]: time="2025-05-17T00:36:44.271877309Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:36:44.271961 env[1411]: time="2025-05-17T00:36:44.271948113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:36:44.272097 env[1411]: time="2025-05-17T00:36:44.272082922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:36:44.272421 env[1411]: time="2025-05-17T00:36:44.272401743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:36:44.272674 env[1411]: time="2025-05-17T00:36:44.272651960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:36:44.272758 env[1411]: time="2025-05-17T00:36:44.272744666Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:36:44.272875 env[1411]: time="2025-05-17T00:36:44.272854773Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:36:44.272944 env[1411]: time="2025-05-17T00:36:44.272929878Z" level=info msg="metadata content store policy set" policy=shared May 17 00:36:44.321164 env[1411]: time="2025-05-17T00:36:44.318995031Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:36:44.321164 env[1411]: time="2025-05-17T00:36:44.319050334Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:36:44.321164 env[1411]: time="2025-05-17T00:36:44.319076236Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:36:44.321164 env[1411]: time="2025-05-17T00:36:44.319121239Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:36:44.321164 env[1411]: time="2025-05-17T00:36:44.319166942Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:36:44.321164 env[1411]: time="2025-05-17T00:36:44.319187743Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:36:44.321164 env[1411]: time="2025-05-17T00:36:44.319205745Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 May 17 00:36:44.321164 env[1411]: time="2025-05-17T00:36:44.319226846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:36:44.321164 env[1411]: time="2025-05-17T00:36:44.319247847Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:36:44.321164 env[1411]: time="2025-05-17T00:36:44.319268549Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:36:44.321164 env[1411]: time="2025-05-17T00:36:44.319285250Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:36:44.321164 env[1411]: time="2025-05-17T00:36:44.319302051Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:36:44.321164 env[1411]: time="2025-05-17T00:36:44.319427459Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:36:44.321164 env[1411]: time="2025-05-17T00:36:44.319520865Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:36:44.321754 env[1411]: time="2025-05-17T00:36:44.319758981Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:36:44.321754 env[1411]: time="2025-05-17T00:36:44.319808485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:36:44.321754 env[1411]: time="2025-05-17T00:36:44.319827886Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:36:44.321754 env[1411]: time="2025-05-17T00:36:44.319893990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 17 00:36:44.321754 env[1411]: time="2025-05-17T00:36:44.319911891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:36:44.321754 env[1411]: time="2025-05-17T00:36:44.319930393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:36:44.321754 env[1411]: time="2025-05-17T00:36:44.319947994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:36:44.321754 env[1411]: time="2025-05-17T00:36:44.319965895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:36:44.321754 env[1411]: time="2025-05-17T00:36:44.319985596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:36:44.321754 env[1411]: time="2025-05-17T00:36:44.320001797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:36:44.321754 env[1411]: time="2025-05-17T00:36:44.320017498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:36:44.321754 env[1411]: time="2025-05-17T00:36:44.320036100Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:36:44.321754 env[1411]: time="2025-05-17T00:36:44.320183909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:36:44.321754 env[1411]: time="2025-05-17T00:36:44.320204011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:36:44.321754 env[1411]: time="2025-05-17T00:36:44.320220912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 17 00:36:44.322264 env[1411]: time="2025-05-17T00:36:44.320235013Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:36:44.322264 env[1411]: time="2025-05-17T00:36:44.320253514Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:36:44.322264 env[1411]: time="2025-05-17T00:36:44.320270815Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:36:44.322264 env[1411]: time="2025-05-17T00:36:44.320295217Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:36:44.322264 env[1411]: time="2025-05-17T00:36:44.320337120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:36:44.322446 env[1411]: time="2025-05-17T00:36:44.320600437Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:36:44.322446 env[1411]: time="2025-05-17T00:36:44.320683843Z" level=info msg="Connect containerd service" May 17 00:36:44.322446 env[1411]: time="2025-05-17T00:36:44.320726545Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:36:44.371300 env[1411]: time="2025-05-17T00:36:44.323115904Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:36:44.371300 env[1411]: time="2025-05-17T00:36:44.323437825Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:36:44.371300 env[1411]: time="2025-05-17T00:36:44.323485528Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 17 00:36:44.371300 env[1411]: time="2025-05-17T00:36:44.329536229Z" level=info msg="Start subscribing containerd event" May 17 00:36:44.371300 env[1411]: time="2025-05-17T00:36:44.329601433Z" level=info msg="Start recovering state" May 17 00:36:44.371300 env[1411]: time="2025-05-17T00:36:44.329687039Z" level=info msg="Start event monitor" May 17 00:36:44.371300 env[1411]: time="2025-05-17T00:36:44.329706540Z" level=info msg="Start snapshots syncer" May 17 00:36:44.371300 env[1411]: time="2025-05-17T00:36:44.329720141Z" level=info msg="Start cni network conf syncer for default" May 17 00:36:44.371300 env[1411]: time="2025-05-17T00:36:44.329736342Z" level=info msg="Start streaming server" May 17 00:36:44.371300 env[1411]: time="2025-05-17T00:36:44.329928455Z" level=info msg="containerd successfully booted in 0.116497s" May 17 00:36:44.371709 extend-filesystems[1388]: Old size kept for /dev/sda9 May 17 00:36:44.371709 extend-filesystems[1388]: Found sr0 May 17 00:36:44.323627 systemd[1]: Started containerd.service. May 17 00:36:44.377791 bash[1436]: Updated "/home/core/.ssh/authorized_keys" May 17 00:36:44.343190 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:36:44.343400 systemd[1]: Finished extend-filesystems.service. May 17 00:36:44.378788 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:36:44.425849 dbus-daemon[1386]: [system] SELinux support is enabled May 17 00:36:44.426017 systemd[1]: Started dbus.service. May 17 00:36:44.430663 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:36:44.430702 systemd[1]: Reached target system-config.target. May 17 00:36:44.432931 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
May 17 00:36:44.432967 systemd[1]: Reached target user-config.target. May 17 00:36:44.441825 systemd[1]: Started systemd-logind.service. May 17 00:36:44.560890 systemd[1]: nvidia.service: Deactivated successfully. May 17 00:36:45.171862 update_engine[1402]: I0517 00:36:45.170517 1402 main.cc:92] Flatcar Update Engine starting May 17 00:36:45.236913 systemd[1]: Started update-engine.service. May 17 00:36:45.242668 update_engine[1402]: I0517 00:36:45.239480 1402 update_check_scheduler.cc:74] Next update check in 8m14s May 17 00:36:45.241655 systemd[1]: Started locksmithd.service. May 17 00:36:45.277903 sshd_keygen[1403]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:36:45.302006 systemd[1]: Finished sshd-keygen.service. May 17 00:36:45.305792 systemd[1]: Starting issuegen.service... May 17 00:36:45.309155 systemd[1]: Started waagent.service. May 17 00:36:45.316959 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:36:45.317168 systemd[1]: Finished issuegen.service. May 17 00:36:45.320343 systemd[1]: Starting systemd-user-sessions.service... May 17 00:36:45.347352 systemd[1]: Finished systemd-user-sessions.service. May 17 00:36:45.351445 systemd[1]: Started getty@tty1.service. May 17 00:36:45.355174 systemd[1]: Started serial-getty@ttyS0.service. May 17 00:36:45.357581 systemd[1]: Reached target getty.target. May 17 00:36:45.417650 systemd[1]: Started kubelet.service. May 17 00:36:45.420522 systemd[1]: Reached target multi-user.target. May 17 00:36:45.433904 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:36:45.442512 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:36:45.442723 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:36:45.445367 systemd[1]: Startup finished in 884ms (firmware) + 32.547s (loader) + 903ms (kernel) + 13.455s (initrd) + 26.483s (userspace) = 1min 14.274s. 
May 17 00:36:45.946645 login[1499]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:36:45.949380 login[1500]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:36:45.990534 systemd[1]: Created slice user-500.slice. May 17 00:36:45.992074 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:36:46.004956 systemd-logind[1399]: New session 2 of user core. May 17 00:36:46.009382 systemd-logind[1399]: New session 1 of user core. May 17 00:36:46.014180 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:36:46.015991 systemd[1]: Starting user@500.service... May 17 00:36:46.021306 (systemd)[1514]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:46.045189 kubelet[1503]: E0517 00:36:46.045158 1503 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:36:46.046891 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:36:46.047001 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:36:46.047315 systemd[1]: kubelet.service: Consumed 1.060s CPU time. May 17 00:36:46.204392 systemd[1514]: Queued start job for default target default.target. May 17 00:36:46.204983 systemd[1514]: Reached target paths.target. May 17 00:36:46.205011 systemd[1514]: Reached target sockets.target. May 17 00:36:46.205027 systemd[1514]: Reached target timers.target. May 17 00:36:46.205043 systemd[1514]: Reached target basic.target. May 17 00:36:46.205170 systemd[1]: Started user@500.service. May 17 00:36:46.206307 systemd[1]: Started session-1.scope. May 17 00:36:46.207098 systemd[1]: Started session-2.scope. 
May 17 00:36:46.208043 systemd[1514]: Reached target default.target. May 17 00:36:46.208249 systemd[1514]: Startup finished in 177ms. May 17 00:36:47.092199 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:36:52.643868 waagent[1494]: 2025-05-17T00:36:52.643746Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 May 17 00:36:52.671542 waagent[1494]: 2025-05-17T00:36:52.661412Z INFO Daemon Daemon OS: flatcar 3510.3.7 May 17 00:36:52.671542 waagent[1494]: 2025-05-17T00:36:52.662457Z INFO Daemon Daemon Python: 3.9.16 May 17 00:36:52.671542 waagent[1494]: 2025-05-17T00:36:52.663593Z INFO Daemon Daemon Run daemon May 17 00:36:52.671542 waagent[1494]: 2025-05-17T00:36:52.664634Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7' May 17 00:36:52.677955 waagent[1494]: 2025-05-17T00:36:52.677838Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
May 17 00:36:52.680412 waagent[1494]: 2025-05-17T00:36:52.680311Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 17 00:36:52.681261 waagent[1494]: 2025-05-17T00:36:52.681207Z INFO Daemon Daemon cloud-init is enabled: False May 17 00:36:52.681933 waagent[1494]: 2025-05-17T00:36:52.681883Z INFO Daemon Daemon Using waagent for provisioning May 17 00:36:52.683215 waagent[1494]: 2025-05-17T00:36:52.683161Z INFO Daemon Daemon Activate resource disk May 17 00:36:52.684280 waagent[1494]: 2025-05-17T00:36:52.684231Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 17 00:36:52.691928 waagent[1494]: 2025-05-17T00:36:52.691867Z INFO Daemon Daemon Found device: None May 17 00:36:52.692804 waagent[1494]: 2025-05-17T00:36:52.692750Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 17 00:36:52.693622 waagent[1494]: 2025-05-17T00:36:52.693573Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 17 00:36:52.695230 waagent[1494]: 2025-05-17T00:36:52.695175Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:36:52.696835 waagent[1494]: 2025-05-17T00:36:52.696784Z INFO Daemon Daemon Running default provisioning handler May 17 00:36:52.706184 waagent[1494]: 2025-05-17T00:36:52.706060Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
May 17 00:36:52.709021 waagent[1494]: 2025-05-17T00:36:52.708915Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 17 00:36:52.709810 waagent[1494]: 2025-05-17T00:36:52.709754Z INFO Daemon Daemon cloud-init is enabled: False May 17 00:36:52.710528 waagent[1494]: 2025-05-17T00:36:52.710478Z INFO Daemon Daemon Copying ovf-env.xml May 17 00:36:52.790313 waagent[1494]: 2025-05-17T00:36:52.790129Z INFO Daemon Daemon Successfully mounted dvd May 17 00:36:52.878417 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 17 00:36:52.900212 waagent[1494]: 2025-05-17T00:36:52.899976Z INFO Daemon Daemon Detect protocol endpoint May 17 00:36:52.913178 waagent[1494]: 2025-05-17T00:36:52.900622Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:36:52.913178 waagent[1494]: 2025-05-17T00:36:52.901630Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler May 17 00:36:52.913178 waagent[1494]: 2025-05-17T00:36:52.902309Z INFO Daemon Daemon Test for route to 168.63.129.16 May 17 00:36:52.913178 waagent[1494]: 2025-05-17T00:36:52.903287Z INFO Daemon Daemon Route to 168.63.129.16 exists May 17 00:36:52.913178 waagent[1494]: 2025-05-17T00:36:52.904690Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 17 00:36:53.023092 waagent[1494]: 2025-05-17T00:36:53.023009Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 17 00:36:53.027227 waagent[1494]: 2025-05-17T00:36:53.027178Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 17 00:36:53.030102 waagent[1494]: 2025-05-17T00:36:53.030039Z INFO Daemon Daemon Server preferred version:2015-04-05 May 17 00:36:53.706495 waagent[1494]: 2025-05-17T00:36:53.706345Z INFO Daemon Daemon Initializing goal state during protocol detection May 17 00:36:53.718366 waagent[1494]: 2025-05-17T00:36:53.718284Z INFO Daemon Daemon Forcing an update of the goal state.. 
May 17 00:36:53.721312 waagent[1494]: 2025-05-17T00:36:53.721246Z INFO Daemon Daemon Fetching goal state [incarnation 1] May 17 00:36:53.793934 waagent[1494]: 2025-05-17T00:36:53.793783Z INFO Daemon Daemon Found private key matching thumbprint 6E6C18590A7DEB884AEB0BE29B979EA0819609BD May 17 00:36:53.799952 waagent[1494]: 2025-05-17T00:36:53.794491Z INFO Daemon Daemon Fetch goal state completed May 17 00:36:53.833818 waagent[1494]: 2025-05-17T00:36:53.833716Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: d48fdc85-6c12-4e17-be61-da041298f6fb New eTag: 2989912556752438092] May 17 00:36:53.841082 waagent[1494]: 2025-05-17T00:36:53.834777Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob May 17 00:36:53.847972 waagent[1494]: 2025-05-17T00:36:53.847898Z INFO Daemon Daemon Starting provisioning May 17 00:36:53.854382 waagent[1494]: 2025-05-17T00:36:53.848270Z INFO Daemon Daemon Handle ovf-env.xml. May 17 00:36:53.854382 waagent[1494]: 2025-05-17T00:36:53.849162Z INFO Daemon Daemon Set hostname [ci-3510.3.7-n-51492a5456] May 17 00:36:53.876949 waagent[1494]: 2025-05-17T00:36:53.876808Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-n-51492a5456] May 17 00:36:53.884108 waagent[1494]: 2025-05-17T00:36:53.877671Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 17 00:36:53.884108 waagent[1494]: 2025-05-17T00:36:53.878580Z INFO Daemon Daemon Primary interface is [eth0] May 17 00:36:53.892178 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. May 17 00:36:53.892434 systemd[1]: Stopped systemd-networkd-wait-online.service. May 17 00:36:53.892509 systemd[1]: Stopping systemd-networkd-wait-online.service... May 17 00:36:53.892869 systemd[1]: Stopping systemd-networkd.service... May 17 00:36:53.898200 systemd-networkd[1167]: eth0: DHCPv6 lease lost May 17 00:36:53.899514 systemd[1]: systemd-networkd.service: Deactivated successfully. 
May 17 00:36:53.899708 systemd[1]: Stopped systemd-networkd.service. May 17 00:36:53.901999 systemd[1]: Starting systemd-networkd.service... May 17 00:36:53.933541 systemd-networkd[1556]: enP65211s1: Link UP May 17 00:36:53.933552 systemd-networkd[1556]: enP65211s1: Gained carrier May 17 00:36:53.934877 systemd-networkd[1556]: eth0: Link UP May 17 00:36:53.934886 systemd-networkd[1556]: eth0: Gained carrier May 17 00:36:53.935346 systemd-networkd[1556]: lo: Link UP May 17 00:36:53.935355 systemd-networkd[1556]: lo: Gained carrier May 17 00:36:53.935668 systemd-networkd[1556]: eth0: Gained IPv6LL May 17 00:36:53.935937 systemd-networkd[1556]: Enumeration completed May 17 00:36:53.936054 systemd[1]: Started systemd-networkd.service. May 17 00:36:53.938264 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:36:53.940411 systemd-networkd[1556]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:36:53.943911 waagent[1494]: 2025-05-17T00:36:53.943508Z INFO Daemon Daemon Create user account if not exists May 17 00:36:53.948704 waagent[1494]: 2025-05-17T00:36:53.948608Z INFO Daemon Daemon User core already exists, skip useradd May 17 00:36:53.951524 waagent[1494]: 2025-05-17T00:36:53.951442Z INFO Daemon Daemon Configure sudoer May 17 00:36:53.954576 waagent[1494]: 2025-05-17T00:36:53.954504Z INFO Daemon Daemon Configure sshd May 17 00:36:53.956801 waagent[1494]: 2025-05-17T00:36:53.956704Z INFO Daemon Daemon Deploy ssh public key. May 17 00:36:53.991229 systemd-networkd[1556]: eth0: DHCPv4 address 10.200.4.30/24, gateway 10.200.4.1 acquired from 168.63.129.16 May 17 00:36:53.994704 systemd[1]: Finished systemd-networkd-wait-online.service. 
May 17 00:36:55.054626 waagent[1494]: 2025-05-17T00:36:55.054530Z INFO Daemon Daemon Provisioning complete May 17 00:36:55.068496 waagent[1494]: 2025-05-17T00:36:55.068413Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 17 00:36:55.071673 waagent[1494]: 2025-05-17T00:36:55.071598Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 17 00:36:55.076621 waagent[1494]: 2025-05-17T00:36:55.076550Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent May 17 00:36:55.344025 waagent[1562]: 2025-05-17T00:36:55.343847Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent May 17 00:36:55.344763 waagent[1562]: 2025-05-17T00:36:55.344696Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:36:55.344912 waagent[1562]: 2025-05-17T00:36:55.344855Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:36:55.355901 waagent[1562]: 2025-05-17T00:36:55.355819Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
May 17 00:36:55.356070 waagent[1562]: 2025-05-17T00:36:55.356015Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] May 17 00:36:55.407702 waagent[1562]: 2025-05-17T00:36:55.407577Z INFO ExtHandler ExtHandler Found private key matching thumbprint 6E6C18590A7DEB884AEB0BE29B979EA0819609BD May 17 00:36:55.408006 waagent[1562]: 2025-05-17T00:36:55.407946Z INFO ExtHandler ExtHandler Fetch goal state completed May 17 00:36:55.421278 waagent[1562]: 2025-05-17T00:36:55.421209Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: ac92a3c8-82d7-45f3-8e7e-7bf3d95420fd New eTag: 2989912556752438092] May 17 00:36:55.421810 waagent[1562]: 2025-05-17T00:36:55.421748Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob May 17 00:36:55.554347 waagent[1562]: 2025-05-17T00:36:55.554199Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 17 00:36:55.578950 waagent[1562]: 2025-05-17T00:36:55.578853Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1562 May 17 00:36:55.582361 waagent[1562]: 2025-05-17T00:36:55.582289Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 17 00:36:55.583555 waagent[1562]: 2025-05-17T00:36:55.583493Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 17 00:36:55.672944 waagent[1562]: 2025-05-17T00:36:55.672877Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 17 00:36:55.673372 waagent[1562]: 2025-05-17T00:36:55.673309Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 17 00:36:55.681349 waagent[1562]: 2025-05-17T00:36:55.681293Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now May 17 00:36:55.681838 waagent[1562]: 2025-05-17T00:36:55.681778Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' May 17 00:36:55.682927 waagent[1562]: 2025-05-17T00:36:55.682863Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] May 17 00:36:55.684253 waagent[1562]: 2025-05-17T00:36:55.684192Z INFO ExtHandler ExtHandler Starting env monitor service. May 17 00:36:55.684687 waagent[1562]: 2025-05-17T00:36:55.684630Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:36:55.684842 waagent[1562]: 2025-05-17T00:36:55.684791Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:36:55.685396 waagent[1562]: 2025-05-17T00:36:55.685338Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 17 00:36:55.685804 waagent[1562]: 2025-05-17T00:36:55.685749Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
May 17 00:36:55.686482 waagent[1562]: 2025-05-17T00:36:55.686423Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:36:55.686616 waagent[1562]: 2025-05-17T00:36:55.686559Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 17 00:36:55.686616 waagent[1562]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 17 00:36:55.686616 waagent[1562]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 May 17 00:36:55.686616 waagent[1562]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 17 00:36:55.686616 waagent[1562]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 17 00:36:55.686616 waagent[1562]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:36:55.686616 waagent[1562]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:36:55.687011 waagent[1562]: 2025-05-17T00:36:55.686962Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:36:55.688042 waagent[1562]: 2025-05-17T00:36:55.687984Z INFO EnvHandler ExtHandler Configure routes May 17 00:36:55.688272 waagent[1562]: 2025-05-17T00:36:55.688213Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 17 00:36:55.690191 waagent[1562]: 2025-05-17T00:36:55.689922Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 17 00:36:55.691214 waagent[1562]: 2025-05-17T00:36:55.691133Z INFO EnvHandler ExtHandler Gateway:None May 17 00:36:55.691461 waagent[1562]: 2025-05-17T00:36:55.691408Z INFO EnvHandler ExtHandler Routes:None May 17 00:36:55.693486 waagent[1562]: 2025-05-17T00:36:55.693423Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 17 00:36:55.693837 waagent[1562]: 2025-05-17T00:36:55.693775Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
May 17 00:36:55.695665 waagent[1562]: 2025-05-17T00:36:55.695602Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 17 00:36:55.704440 waagent[1562]: 2025-05-17T00:36:55.704379Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) May 17 00:36:55.706769 waagent[1562]: 2025-05-17T00:36:55.706710Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required May 17 00:36:55.707631 waagent[1562]: 2025-05-17T00:36:55.707569Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' May 17 00:36:55.737599 waagent[1562]: 2025-05-17T00:36:55.737485Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1556' May 17 00:36:55.750492 waagent[1562]: 2025-05-17T00:36:55.750436Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. May 17 00:36:55.873738 waagent[1562]: 2025-05-17T00:36:55.873606Z INFO MonitorHandler ExtHandler Network interfaces: May 17 00:36:55.873738 waagent[1562]: Executing ['ip', '-a', '-o', 'link']: May 17 00:36:55.873738 waagent[1562]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 17 00:36:55.873738 waagent[1562]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:02:de:5b brd ff:ff:ff:ff:ff:ff May 17 00:36:55.873738 waagent[1562]: 3: enP65211s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:02:de:5b brd ff:ff:ff:ff:ff:ff\ altname enP65211p0s2 May 17 00:36:55.873738 waagent[1562]: Executing ['ip', '-4', '-a', '-o', 'address']: May 17 00:36:55.873738 waagent[1562]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 17 00:36:55.873738 waagent[1562]: 2: eth0 inet 10.200.4.30/24 metric 1024 brd 
10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever May 17 00:36:55.873738 waagent[1562]: Executing ['ip', '-6', '-a', '-o', 'address']: May 17 00:36:55.873738 waagent[1562]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 17 00:36:55.873738 waagent[1562]: 2: eth0 inet6 fe80::7e1e:52ff:fe02:de5b/64 scope link \ valid_lft forever preferred_lft forever May 17 00:36:56.024598 waagent[1562]: 2025-05-17T00:36:56.024458Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.13.1.1 -- exiting May 17 00:36:56.076072 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:36:56.076338 systemd[1]: Stopped kubelet.service. May 17 00:36:56.076395 systemd[1]: kubelet.service: Consumed 1.060s CPU time. May 17 00:36:56.078061 systemd[1]: Starting kubelet.service... May 17 00:36:56.079737 waagent[1494]: 2025-05-17T00:36:56.079559Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running May 17 00:36:56.086298 waagent[1494]: 2025-05-17T00:36:56.086220Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.13.1.1 to be the latest agent May 17 00:36:56.266234 systemd[1]: Started kubelet.service. May 17 00:36:56.870543 kubelet[1596]: E0517 00:36:56.870494 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:36:56.877231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:36:56.877408 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 17 00:36:57.867113 waagent[1593]: 2025-05-17T00:36:57.866999Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.13.1.1) May 17 00:36:57.868504 waagent[1593]: 2025-05-17T00:36:57.868432Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7 May 17 00:36:57.868657 waagent[1593]: 2025-05-17T00:36:57.868600Z INFO ExtHandler ExtHandler Python: 3.9.16 May 17 00:36:57.868809 waagent[1593]: 2025-05-17T00:36:57.868761Z INFO ExtHandler ExtHandler CPU Arch: x86_64 May 17 00:36:57.884588 waagent[1593]: 2025-05-17T00:36:57.884472Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; May 17 00:36:57.885009 waagent[1593]: 2025-05-17T00:36:57.884947Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:36:57.885189 waagent[1593]: 2025-05-17T00:36:57.885124Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:36:57.885420 waagent[1593]: 2025-05-17T00:36:57.885369Z INFO ExtHandler ExtHandler Initializing the goal state... May 17 00:36:57.897578 waagent[1593]: 2025-05-17T00:36:57.897495Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 17 00:36:57.906066 waagent[1593]: 2025-05-17T00:36:57.906004Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.166 May 17 00:36:57.907053 waagent[1593]: 2025-05-17T00:36:57.906989Z INFO ExtHandler May 17 00:36:57.907224 waagent[1593]: 2025-05-17T00:36:57.907170Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 05901148-5cdd-4635-bf9d-d5b047c62504 eTag: 2989912556752438092 source: Fabric] May 17 00:36:57.907927 waagent[1593]: 2025-05-17T00:36:57.907870Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
May 17 00:36:57.909028 waagent[1593]: 2025-05-17T00:36:57.908967Z INFO ExtHandler May 17 00:36:57.909176 waagent[1593]: 2025-05-17T00:36:57.909110Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 17 00:36:57.915908 waagent[1593]: 2025-05-17T00:36:57.915856Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 17 00:36:57.916362 waagent[1593]: 2025-05-17T00:36:57.916314Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required May 17 00:36:57.937391 waagent[1593]: 2025-05-17T00:36:57.937323Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. May 17 00:36:57.993668 waagent[1593]: 2025-05-17T00:36:57.993541Z INFO ExtHandler Downloaded certificate {'thumbprint': '6E6C18590A7DEB884AEB0BE29B979EA0819609BD', 'hasPrivateKey': True} May 17 00:36:57.994868 waagent[1593]: 2025-05-17T00:36:57.994797Z INFO ExtHandler Fetch goal state from WireServer completed May 17 00:36:57.995721 waagent[1593]: 2025-05-17T00:36:57.995658Z INFO ExtHandler ExtHandler Goal state initialization completed. 
May 17 00:36:58.013265 waagent[1593]: 2025-05-17T00:36:58.013156Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) May 17 00:36:58.021542 waagent[1593]: 2025-05-17T00:36:58.021444Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 17 00:36:58.025094 waagent[1593]: 2025-05-17T00:36:58.024998Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] May 17 00:36:58.025334 waagent[1593]: 2025-05-17T00:36:58.025281Z INFO ExtHandler ExtHandler Checking state of the firewall May 17 00:36:58.147955 waagent[1593]: 2025-05-17T00:36:58.147831Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric: May 17 00:36:58.147955 waagent[1593]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:36:58.147955 waagent[1593]: pkts bytes target prot opt in out source destination May 17 00:36:58.147955 waagent[1593]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 17 00:36:58.147955 waagent[1593]: pkts bytes target prot opt in out source destination May 17 00:36:58.147955 waagent[1593]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:36:58.147955 waagent[1593]: pkts bytes target prot opt in out source destination May 17 00:36:58.147955 waagent[1593]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 17 00:36:58.147955 waagent[1593]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 17 00:36:58.147955 waagent[1593]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 17 00:36:58.149090 waagent[1593]: 2025-05-17T00:36:58.149020Z INFO ExtHandler ExtHandler Setting up persistent firewall rules May 17 00:36:58.151779 waagent[1593]: 2025-05-17T00:36:58.151678Z INFO ExtHandler ExtHandler The firewalld service is not present on the system May 17 00:36:58.152035 
waagent[1593]: 2025-05-17T00:36:58.151982Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 17 00:36:58.152403 waagent[1593]: 2025-05-17T00:36:58.152346Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 17 00:36:58.160847 waagent[1593]: 2025-05-17T00:36:58.160786Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 17 00:36:58.161372 waagent[1593]: 2025-05-17T00:36:58.161314Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' May 17 00:36:58.168820 waagent[1593]: 2025-05-17T00:36:58.168744Z INFO ExtHandler ExtHandler WALinuxAgent-2.13.1.1 running as process 1593 May 17 00:36:58.171784 waagent[1593]: 2025-05-17T00:36:58.171715Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 17 00:36:58.172580 waagent[1593]: 2025-05-17T00:36:58.172520Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled May 17 00:36:58.173426 waagent[1593]: 2025-05-17T00:36:58.173367Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 17 00:36:58.175884 waagent[1593]: 2025-05-17T00:36:58.175822Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 17 00:36:58.177151 waagent[1593]: 2025-05-17T00:36:58.177083Z INFO ExtHandler ExtHandler Starting env monitor service. 
May 17 00:36:58.177613 waagent[1593]: 2025-05-17T00:36:58.177558Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:36:58.177768 waagent[1593]: 2025-05-17T00:36:58.177721Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:36:58.178320 waagent[1593]: 2025-05-17T00:36:58.178261Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 17 00:36:58.178754 waagent[1593]: 2025-05-17T00:36:58.178699Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 17 00:36:58.179420 waagent[1593]: 2025-05-17T00:36:58.179363Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:36:58.179848 waagent[1593]: 2025-05-17T00:36:58.179792Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 17 00:36:58.179848 waagent[1593]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 17 00:36:58.179848 waagent[1593]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 May 17 00:36:58.179848 waagent[1593]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 17 00:36:58.179848 waagent[1593]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 17 00:36:58.179848 waagent[1593]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:36:58.179848 waagent[1593]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:36:58.180135 waagent[1593]: 2025-05-17T00:36:58.179873Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:36:58.180363 waagent[1593]: 2025-05-17T00:36:58.180307Z INFO EnvHandler ExtHandler Configure routes May 17 00:36:58.180572 waagent[1593]: 2025-05-17T00:36:58.180522Z INFO EnvHandler ExtHandler Gateway:None May 17 00:36:58.180900 waagent[1593]: 2025-05-17T00:36:58.180843Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 17 00:36:58.181343 waagent[1593]: 2025-05-17T00:36:58.181287Z INFO ExtHandler ExtHandler Start Extension 
Telemetry service. May 17 00:36:58.181724 waagent[1593]: 2025-05-17T00:36:58.181672Z INFO EnvHandler ExtHandler Routes:None May 17 00:36:58.185891 waagent[1593]: 2025-05-17T00:36:58.185832Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 17 00:36:58.186253 waagent[1593]: 2025-05-17T00:36:58.186196Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. May 17 00:36:58.191036 waagent[1593]: 2025-05-17T00:36:58.190960Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 17 00:36:58.214549 waagent[1593]: 2025-05-17T00:36:58.214471Z INFO ExtHandler ExtHandler Downloading agent manifest May 17 00:36:58.216129 waagent[1593]: 2025-05-17T00:36:58.216062Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 17 00:36:58.216328 waagent[1593]: 2025-05-17T00:36:58.216268Z INFO MonitorHandler ExtHandler Network interfaces: May 17 00:36:58.216328 waagent[1593]: Executing ['ip', '-a', '-o', 'link']: May 17 00:36:58.216328 waagent[1593]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 17 00:36:58.216328 waagent[1593]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:02:de:5b brd ff:ff:ff:ff:ff:ff May 17 00:36:58.216328 waagent[1593]: 3: enP65211s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:02:de:5b brd ff:ff:ff:ff:ff:ff\ altname enP65211p0s2 May 17 00:36:58.216328 waagent[1593]: Executing ['ip', '-4', '-a', '-o', 'address']: May 17 00:36:58.216328 waagent[1593]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 17 00:36:58.216328 waagent[1593]: 2: eth0 inet 10.200.4.30/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever 
May 17 00:36:58.216328 waagent[1593]: Executing ['ip', '-6', '-a', '-o', 'address']: May 17 00:36:58.216328 waagent[1593]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 17 00:36:58.216328 waagent[1593]: 2: eth0 inet6 fe80::7e1e:52ff:fe02:de5b/64 scope link \ valid_lft forever preferred_lft forever May 17 00:36:58.231473 waagent[1593]: 2025-05-17T00:36:58.231395Z INFO ExtHandler ExtHandler May 17 00:36:58.231694 waagent[1593]: 2025-05-17T00:36:58.231631Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 22f30390-fc9d-4c62-99fa-1d2ca1737b66 correlation 71d8dd71-d094-43c2-938a-7ea7dffdcd7d created: 2025-05-17T00:35:21.087082Z] May 17 00:36:58.234734 waagent[1593]: 2025-05-17T00:36:58.234666Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 17 00:36:58.238024 waagent[1593]: 2025-05-17T00:36:58.237830Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 6 ms] May 17 00:36:58.264689 waagent[1593]: 2025-05-17T00:36:58.264606Z INFO ExtHandler ExtHandler Looking for existing remote access users. May 17 00:36:58.270241 waagent[1593]: 2025-05-17T00:36:58.270125Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.13.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 62FA4154-AA0C-470C-B3B8-2A0E02A7CBCB;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] May 17 00:36:58.275409 waagent[1593]: 2025-05-17T00:36:58.275341Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 17 00:37:06.890746 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:37:06.891060 systemd[1]: Stopped kubelet.service. May 17 00:37:06.893188 systemd[1]: Starting kubelet.service... May 17 00:37:06.989550 systemd[1]: Started kubelet.service. 
May 17 00:37:07.672424 kubelet[1645]: E0517 00:37:07.672370 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:37:07.674285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:37:07.674443 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:37:08.397184 systemd[1]: Created slice system-sshd.slice. May 17 00:37:08.399378 systemd[1]: Started sshd@0-10.200.4.30:22-10.200.16.10:58178.service. May 17 00:37:09.261647 sshd[1651]: Accepted publickey for core from 10.200.16.10 port 58178 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:09.263122 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:09.267469 systemd-logind[1399]: New session 3 of user core. May 17 00:37:09.268042 systemd[1]: Started session-3.scope. May 17 00:37:09.786929 systemd[1]: Started sshd@1-10.200.4.30:22-10.200.16.10:44554.service. May 17 00:37:10.380861 sshd[1656]: Accepted publickey for core from 10.200.16.10 port 44554 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:10.382501 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:10.387682 systemd[1]: Started session-4.scope. May 17 00:37:10.388127 systemd-logind[1399]: New session 4 of user core. May 17 00:37:10.815592 sshd[1656]: pam_unix(sshd:session): session closed for user core May 17 00:37:10.819078 systemd[1]: sshd@1-10.200.4.30:22-10.200.16.10:44554.service: Deactivated successfully. May 17 00:37:10.820128 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:37:10.820893 systemd-logind[1399]: Session 4 logged out. Waiting for processes to exit. 
May 17 00:37:10.821856 systemd-logind[1399]: Removed session 4. May 17 00:37:10.915370 systemd[1]: Started sshd@2-10.200.4.30:22-10.200.16.10:44558.service. May 17 00:37:11.506225 sshd[1662]: Accepted publickey for core from 10.200.16.10 port 44558 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:11.507895 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:11.512581 systemd[1]: Started session-5.scope. May 17 00:37:11.513020 systemd-logind[1399]: New session 5 of user core. May 17 00:37:11.933766 sshd[1662]: pam_unix(sshd:session): session closed for user core May 17 00:37:11.936441 systemd[1]: sshd@2-10.200.4.30:22-10.200.16.10:44558.service: Deactivated successfully. May 17 00:37:11.937476 systemd-logind[1399]: Session 5 logged out. Waiting for processes to exit. May 17 00:37:11.937558 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:37:11.938625 systemd-logind[1399]: Removed session 5. May 17 00:37:12.033582 systemd[1]: Started sshd@3-10.200.4.30:22-10.200.16.10:44574.service. May 17 00:37:12.627883 sshd[1668]: Accepted publickey for core from 10.200.16.10 port 44574 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:12.629290 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:12.634167 systemd[1]: Started session-6.scope. May 17 00:37:12.634754 systemd-logind[1399]: New session 6 of user core. May 17 00:37:13.060925 sshd[1668]: pam_unix(sshd:session): session closed for user core May 17 00:37:13.064245 systemd[1]: sshd@3-10.200.4.30:22-10.200.16.10:44574.service: Deactivated successfully. May 17 00:37:13.065060 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:37:13.065708 systemd-logind[1399]: Session 6 logged out. Waiting for processes to exit. May 17 00:37:13.066455 systemd-logind[1399]: Removed session 6. 
May 17 00:37:13.180653 systemd[1]: Started sshd@4-10.200.4.30:22-10.200.16.10:44576.service. May 17 00:37:13.775862 sshd[1674]: Accepted publickey for core from 10.200.16.10 port 44576 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:13.777598 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:13.782239 systemd[1]: Started session-7.scope. May 17 00:37:13.782828 systemd-logind[1399]: New session 7 of user core. May 17 00:37:14.441414 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:37:14.441773 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:37:14.474426 systemd[1]: Starting coreos-metadata.service... May 17 00:37:14.593876 coreos-metadata[1681]: May 17 00:37:14.593 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 17 00:37:14.596454 coreos-metadata[1681]: May 17 00:37:14.596 INFO Fetch successful May 17 00:37:14.596742 coreos-metadata[1681]: May 17 00:37:14.596 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 17 00:37:14.598684 coreos-metadata[1681]: May 17 00:37:14.598 INFO Fetch successful May 17 00:37:14.599096 coreos-metadata[1681]: May 17 00:37:14.599 INFO Fetching http://168.63.129.16/machine/143688ed-2e34-4488-9a49-c62fb358636d/64aa412f%2Da19b%2D4c9e%2Dbb84%2D671394ba870c.%5Fci%2D3510.3.7%2Dn%2D51492a5456?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 17 00:37:14.601044 coreos-metadata[1681]: May 17 00:37:14.600 INFO Fetch successful May 17 00:37:14.633961 coreos-metadata[1681]: May 17 00:37:14.633 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 17 00:37:14.643342 coreos-metadata[1681]: May 17 00:37:14.643 INFO Fetch successful May 17 00:37:14.652586 systemd[1]: Finished coreos-metadata.service. May 17 00:37:15.303404 systemd[1]: Stopped kubelet.service. 
May 17 00:37:15.306448 systemd[1]: Starting kubelet.service... May 17 00:37:15.344700 systemd[1]: Reloading. May 17 00:37:15.468708 /usr/lib/systemd/system-generators/torcx-generator[1736]: time="2025-05-17T00:37:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:37:15.468749 /usr/lib/systemd/system-generators/torcx-generator[1736]: time="2025-05-17T00:37:15Z" level=info msg="torcx already run" May 17 00:37:15.569656 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:37:15.569676 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:37:15.586451 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:37:15.684634 systemd[1]: Started kubelet.service. May 17 00:37:15.687535 systemd[1]: Stopping kubelet.service... May 17 00:37:15.688050 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:37:15.688293 systemd[1]: Stopped kubelet.service. May 17 00:37:15.690043 systemd[1]: Starting kubelet.service... May 17 00:37:16.673514 systemd[1]: Started kubelet.service. May 17 00:37:16.715323 kubelet[1806]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:37:16.715323 kubelet[1806]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:37:16.715323 kubelet[1806]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:37:16.715808 kubelet[1806]: I0517 00:37:16.715392 1806 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:37:17.250803 kubelet[1806]: I0517 00:37:17.250757 1806 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:37:17.250803 kubelet[1806]: I0517 00:37:17.250795 1806 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:37:17.251279 kubelet[1806]: I0517 00:37:17.251255 1806 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:37:17.345493 kubelet[1806]: I0517 00:37:17.345336 1806 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:37:17.358813 kubelet[1806]: E0517 00:37:17.358767 1806 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:37:17.358813 kubelet[1806]: I0517 00:37:17.358811 1806 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:37:17.362946 kubelet[1806]: I0517 00:37:17.362919 1806 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:37:17.363240 kubelet[1806]: I0517 00:37:17.363211 1806 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:37:17.363423 kubelet[1806]: I0517 00:37:17.363238 1806 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.4.30","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:37:17.363568 kubelet[1806]: I0517 00:37:17.363432 1806 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:37:17.363568 
kubelet[1806]: I0517 00:37:17.363444 1806 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:37:17.363655 kubelet[1806]: I0517 00:37:17.363599 1806 state_mem.go:36] "Initialized new in-memory state store" May 17 00:37:17.370564 kubelet[1806]: I0517 00:37:17.370529 1806 kubelet.go:480] "Attempting to sync node with API server" May 17 00:37:17.370564 kubelet[1806]: I0517 00:37:17.370562 1806 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:37:17.370743 kubelet[1806]: I0517 00:37:17.370592 1806 kubelet.go:386] "Adding apiserver pod source" May 17 00:37:17.370743 kubelet[1806]: I0517 00:37:17.370612 1806 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:37:17.378944 kubelet[1806]: E0517 00:37:17.378912 1806 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:17.379126 kubelet[1806]: E0517 00:37:17.379108 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:17.382659 kubelet[1806]: I0517 00:37:17.382640 1806 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:37:17.383193 kubelet[1806]: I0517 00:37:17.383172 1806 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:37:17.383791 kubelet[1806]: W0517 00:37:17.383769 1806 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 00:37:17.386381 kubelet[1806]: I0517 00:37:17.386360 1806 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:37:17.386462 kubelet[1806]: I0517 00:37:17.386414 1806 server.go:1289] "Started kubelet" May 17 00:37:17.392585 kubelet[1806]: I0517 00:37:17.392542 1806 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:37:17.393610 kubelet[1806]: I0517 00:37:17.393556 1806 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:37:17.394052 kubelet[1806]: I0517 00:37:17.394033 1806 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:37:17.394186 kubelet[1806]: I0517 00:37:17.393642 1806 server.go:317] "Adding debug handlers to kubelet server" May 17 00:37:17.400204 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 17 00:37:17.404434 kubelet[1806]: I0517 00:37:17.404100 1806 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:37:17.416169 kubelet[1806]: I0517 00:37:17.415913 1806 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:37:17.416345 kubelet[1806]: I0517 00:37:17.416226 1806 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:37:17.419345 kubelet[1806]: I0517 00:37:17.419315 1806 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:37:17.419507 kubelet[1806]: I0517 00:37:17.419394 1806 reconciler.go:26] "Reconciler: start to sync state" May 17 00:37:17.419686 kubelet[1806]: E0517 00:37:17.419667 1806 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.4.30\" not found" May 17 00:37:17.422381 kubelet[1806]: I0517 00:37:17.422354 1806 factory.go:223] Registration of the systemd container factory successfully May 17 
May 17 00:37:17.422669 kubelet[1806]: I0517 00:37:17.422644 1806 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:37:17.424609 kubelet[1806]: I0517 00:37:17.424573 1806 factory.go:223] Registration of the containerd container factory successfully
May 17 00:37:17.426648 kubelet[1806]: E0517 00:37:17.426623 1806 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 00:37:17.445154 kubelet[1806]: E0517 00:37:17.445075 1806 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.4.30\" not found" node="10.200.4.30"
May 17 00:37:17.452120 kubelet[1806]: I0517 00:37:17.452089 1806 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 17 00:37:17.452120 kubelet[1806]: I0517 00:37:17.452108 1806 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 17 00:37:17.452120 kubelet[1806]: I0517 00:37:17.452131 1806 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:37:17.457469 kubelet[1806]: I0517 00:37:17.457443 1806 policy_none.go:49] "None policy: Start"
May 17 00:37:17.457469 kubelet[1806]: I0517 00:37:17.457469 1806 memory_manager.go:186] "Starting memorymanager" policy="None"
May 17 00:37:17.457680 kubelet[1806]: I0517 00:37:17.457483 1806 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:37:17.464848 systemd[1]: Created slice kubepods.slice.
May 17 00:37:17.470282 systemd[1]: Created slice kubepods-burstable.slice.
May 17 00:37:17.475227 systemd[1]: Created slice kubepods-besteffort.slice.
May 17 00:37:17.476973 kubelet[1806]: I0517 00:37:17.476946 1806 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 17 00:37:17.478953 kubelet[1806]: I0517 00:37:17.478932 1806 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 17 00:37:17.479076 kubelet[1806]: I0517 00:37:17.479068 1806 status_manager.go:230] "Starting to sync pod status with apiserver"
May 17 00:37:17.479155 kubelet[1806]: I0517 00:37:17.479137 1806 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 17 00:37:17.479218 kubelet[1806]: I0517 00:37:17.479211 1806 kubelet.go:2436] "Starting kubelet main sync loop"
May 17 00:37:17.479329 kubelet[1806]: E0517 00:37:17.479294 1806 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:37:17.483971 kubelet[1806]: E0517 00:37:17.483943 1806 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 17 00:37:17.485700 kubelet[1806]: I0517 00:37:17.485382 1806 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:37:17.485700 kubelet[1806]: I0517 00:37:17.485405 1806 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:37:17.485841 kubelet[1806]: I0517 00:37:17.485764 1806 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:37:17.487624 kubelet[1806]: E0517 00:37:17.487566 1806 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 17 00:37:17.487624 kubelet[1806]: E0517 00:37:17.487607 1806 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.4.30\" not found"
May 17 00:37:17.587134 kubelet[1806]: I0517 00:37:17.587002 1806 kubelet_node_status.go:75] "Attempting to register node" node="10.200.4.30"
May 17 00:37:17.597036 kubelet[1806]: I0517 00:37:17.597007 1806 kubelet_node_status.go:78] "Successfully registered node" node="10.200.4.30"
May 17 00:37:17.706622 kubelet[1806]: I0517 00:37:17.706440 1806 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
May 17 00:37:17.706856 env[1411]: time="2025-05-17T00:37:17.706808981Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 17 00:37:17.707297 kubelet[1806]: I0517 00:37:17.707091 1806 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
May 17 00:37:17.710156 sudo[1677]: pam_unix(sudo:session): session closed for user root
May 17 00:37:17.823392 sshd[1674]: pam_unix(sshd:session): session closed for user core
May 17 00:37:17.826628 systemd[1]: sshd@4-10.200.4.30:22-10.200.16.10:44576.service: Deactivated successfully.
May 17 00:37:17.827711 systemd[1]: session-7.scope: Deactivated successfully.
May 17 00:37:17.828520 systemd-logind[1399]: Session 7 logged out. Waiting for processes to exit.
May 17 00:37:17.829646 systemd-logind[1399]: Removed session 7.
May 17 00:37:18.254741 kubelet[1806]: I0517 00:37:18.254696 1806 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
May 17 00:37:18.255466 kubelet[1806]: I0517 00:37:18.255427 1806 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
May 17 00:37:18.255466 kubelet[1806]: I0517 00:37:18.255488 1806 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
May 17 00:37:18.255466 kubelet[1806]: I0517 00:37:18.255521 1806 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
May 17 00:37:18.379136 kubelet[1806]: I0517 00:37:18.379078 1806 apiserver.go:52] "Watching apiserver"
May 17 00:37:18.379381 kubelet[1806]: E0517 00:37:18.379355 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:18.396962 systemd[1]: Created slice kubepods-burstable-pod3e1b7f17_dbc2_4069_a940_712425198af5.slice.
May 17 00:37:18.406014 systemd[1]: Created slice kubepods-besteffort-podf261a5fa_cc44_4315_bc78_53ce51a88afe.slice.
May 17 00:37:18.420737 kubelet[1806]: I0517 00:37:18.420699 1806 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 17 00:37:18.426525 kubelet[1806]: I0517 00:37:18.426498 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e1b7f17-dbc2-4069-a940-712425198af5-clustermesh-secrets\") pod \"cilium-9lrhx\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " pod="kube-system/cilium-9lrhx"
May 17 00:37:18.426668 kubelet[1806]: I0517 00:37:18.426530 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-host-proc-sys-net\") pod \"cilium-9lrhx\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " pod="kube-system/cilium-9lrhx"
May 17 00:37:18.426668 kubelet[1806]: I0517 00:37:18.426553 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e1b7f17-dbc2-4069-a940-712425198af5-hubble-tls\") pod \"cilium-9lrhx\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " pod="kube-system/cilium-9lrhx"
May 17 00:37:18.426668 kubelet[1806]: I0517 00:37:18.426573 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkz5p\" (UniqueName: \"kubernetes.io/projected/3e1b7f17-dbc2-4069-a940-712425198af5-kube-api-access-hkz5p\") pod \"cilium-9lrhx\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " pod="kube-system/cilium-9lrhx"
May 17 00:37:18.426668 kubelet[1806]: I0517 00:37:18.426604 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-cni-path\") pod \"cilium-9lrhx\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " pod="kube-system/cilium-9lrhx"
May 17 00:37:18.426668 kubelet[1806]: I0517 00:37:18.426623 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-xtables-lock\") pod \"cilium-9lrhx\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " pod="kube-system/cilium-9lrhx"
May 17 00:37:18.426668 kubelet[1806]: I0517 00:37:18.426643 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f261a5fa-cc44-4315-bc78-53ce51a88afe-lib-modules\") pod \"kube-proxy-4ctkc\" (UID: \"f261a5fa-cc44-4315-bc78-53ce51a88afe\") " pod="kube-system/kube-proxy-4ctkc"
May 17 00:37:18.426920 kubelet[1806]: I0517 00:37:18.426661 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-bpf-maps\") pod \"cilium-9lrhx\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " pod="kube-system/cilium-9lrhx"
May 17 00:37:18.426920 kubelet[1806]: I0517 00:37:18.426683 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-lib-modules\") pod \"cilium-9lrhx\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " pod="kube-system/cilium-9lrhx"
May 17 00:37:18.426920 kubelet[1806]: I0517 00:37:18.426703 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e1b7f17-dbc2-4069-a940-712425198af5-cilium-config-path\") pod \"cilium-9lrhx\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " pod="kube-system/cilium-9lrhx"
May 17 00:37:18.426920 kubelet[1806]: I0517 00:37:18.426731 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f261a5fa-cc44-4315-bc78-53ce51a88afe-kube-proxy\") pod \"kube-proxy-4ctkc\" (UID: \"f261a5fa-cc44-4315-bc78-53ce51a88afe\") " pod="kube-system/kube-proxy-4ctkc"
May 17 00:37:18.426920 kubelet[1806]: I0517 00:37:18.426754 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f261a5fa-cc44-4315-bc78-53ce51a88afe-xtables-lock\") pod \"kube-proxy-4ctkc\" (UID: \"f261a5fa-cc44-4315-bc78-53ce51a88afe\") " pod="kube-system/kube-proxy-4ctkc"
May 17 00:37:18.427106 kubelet[1806]: I0517 00:37:18.426776 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l74kf\" (UniqueName: \"kubernetes.io/projected/f261a5fa-cc44-4315-bc78-53ce51a88afe-kube-api-access-l74kf\") pod \"kube-proxy-4ctkc\" (UID: \"f261a5fa-cc44-4315-bc78-53ce51a88afe\") " pod="kube-system/kube-proxy-4ctkc"
May 17 00:37:18.427106 kubelet[1806]: I0517 00:37:18.426797 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-hostproc\") pod \"cilium-9lrhx\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " pod="kube-system/cilium-9lrhx"
May 17 00:37:18.427106 kubelet[1806]: I0517 00:37:18.426818 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-cilium-cgroup\") pod \"cilium-9lrhx\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " pod="kube-system/cilium-9lrhx"
May 17 00:37:18.427106 kubelet[1806]: I0517 00:37:18.426840 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-etc-cni-netd\") pod \"cilium-9lrhx\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " pod="kube-system/cilium-9lrhx"
May 17 00:37:18.427106 kubelet[1806]: I0517 00:37:18.426861 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-host-proc-sys-kernel\") pod \"cilium-9lrhx\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " pod="kube-system/cilium-9lrhx"
May 17 00:37:18.427106 kubelet[1806]: I0517 00:37:18.426890 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-cilium-run\") pod \"cilium-9lrhx\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " pod="kube-system/cilium-9lrhx"
May 17 00:37:18.529705 kubelet[1806]: I0517 00:37:18.529588 1806 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
May 17 00:37:18.704691 env[1411]: time="2025-05-17T00:37:18.704637199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9lrhx,Uid:3e1b7f17-dbc2-4069-a940-712425198af5,Namespace:kube-system,Attempt:0,}"
May 17 00:37:18.715812 env[1411]: time="2025-05-17T00:37:18.715765881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4ctkc,Uid:f261a5fa-cc44-4315-bc78-53ce51a88afe,Namespace:kube-system,Attempt:0,}"
May 17 00:37:19.380113 kubelet[1806]: E0517 00:37:19.380056 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:19.894346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1569508405.mount: Deactivated successfully.
May 17 00:37:19.924322 env[1411]: time="2025-05-17T00:37:19.924260479Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:37:19.928504 env[1411]: time="2025-05-17T00:37:19.928457908Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:37:19.942413 env[1411]: time="2025-05-17T00:37:19.942368804Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:37:19.946633 env[1411]: time="2025-05-17T00:37:19.946592033Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:37:19.949693 env[1411]: time="2025-05-17T00:37:19.949664455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:37:19.954193 env[1411]: time="2025-05-17T00:37:19.954162586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:37:19.957488 env[1411]: time="2025-05-17T00:37:19.957453308Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:37:19.964020 env[1411]: time="2025-05-17T00:37:19.963981854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:37:20.028035 env[1411]: time="2025-05-17T00:37:20.027926485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:37:20.028035 env[1411]: time="2025-05-17T00:37:20.027954085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:37:20.028035 env[1411]: time="2025-05-17T00:37:20.027967585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:37:20.028384 env[1411]: time="2025-05-17T00:37:20.028075886Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700 pid=1866 runtime=io.containerd.runc.v2
May 17 00:37:20.028486 env[1411]: time="2025-05-17T00:37:20.027805884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:37:20.028486 env[1411]: time="2025-05-17T00:37:20.027853884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:37:20.028486 env[1411]: time="2025-05-17T00:37:20.027876484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:37:20.028750 env[1411]: time="2025-05-17T00:37:20.028680290Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/56ce81a922768e9334155c0d1eec50b9e0133fbe611247fdb3b9037275f91b62 pid=1859 runtime=io.containerd.runc.v2
May 17 00:37:20.056754 systemd[1]: Started cri-containerd-56ce81a922768e9334155c0d1eec50b9e0133fbe611247fdb3b9037275f91b62.scope.
May 17 00:37:20.058407 systemd[1]: Started cri-containerd-94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700.scope.
May 17 00:37:20.096618 env[1411]: time="2025-05-17T00:37:20.096556430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9lrhx,Uid:3e1b7f17-dbc2-4069-a940-712425198af5,Namespace:kube-system,Attempt:0,} returns sandbox id \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\""
May 17 00:37:20.099602 env[1411]: time="2025-05-17T00:37:20.099549350Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 17 00:37:20.100413 env[1411]: time="2025-05-17T00:37:20.100378455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4ctkc,Uid:f261a5fa-cc44-4315-bc78-53ce51a88afe,Namespace:kube-system,Attempt:0,} returns sandbox id \"56ce81a922768e9334155c0d1eec50b9e0133fbe611247fdb3b9037275f91b62\""
May 17 00:37:20.380445 kubelet[1806]: E0517 00:37:20.380320 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:21.380854 kubelet[1806]: E0517 00:37:21.380818 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:22.381027 kubelet[1806]: E0517 00:37:22.380985 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:22.381523 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
May 17 00:37:23.381784 kubelet[1806]: E0517 00:37:23.381719 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:24.382482 kubelet[1806]: E0517 00:37:24.382412 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:25.382596 kubelet[1806]: E0517 00:37:25.382541 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:25.551929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1390472995.mount: Deactivated successfully.
May 17 00:37:26.382715 kubelet[1806]: E0517 00:37:26.382632 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:27.383027 kubelet[1806]: E0517 00:37:27.382954 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:28.267068 env[1411]: time="2025-05-17T00:37:28.266999318Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:37:28.275449 env[1411]: time="2025-05-17T00:37:28.275396551Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:37:28.284633 env[1411]: time="2025-05-17T00:37:28.284597486Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:37:28.285102 env[1411]: time="2025-05-17T00:37:28.285072788Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 17 00:37:28.287339 env[1411]: time="2025-05-17T00:37:28.287278897Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\""
May 17 00:37:28.294734 env[1411]: time="2025-05-17T00:37:28.294703726Z" level=info msg="CreateContainer within sandbox \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:37:28.331253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4258731209.mount: Deactivated successfully.
May 17 00:37:28.337896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount196842268.mount: Deactivated successfully.
May 17 00:37:28.355439 env[1411]: time="2025-05-17T00:37:28.355365661Z" level=info msg="CreateContainer within sandbox \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f\""
May 17 00:37:28.356328 env[1411]: time="2025-05-17T00:37:28.356290064Z" level=info msg="StartContainer for \"c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f\""
May 17 00:37:28.378546 systemd[1]: Started cri-containerd-c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f.scope.
May 17 00:37:28.384128 kubelet[1806]: E0517 00:37:28.384069 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:28.412275 env[1411]: time="2025-05-17T00:37:28.411203077Z" level=info msg="StartContainer for \"c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f\" returns successfully"
May 17 00:37:28.416704 systemd[1]: cri-containerd-c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f.scope: Deactivated successfully.
May 17 00:37:29.328919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f-rootfs.mount: Deactivated successfully.
May 17 00:37:29.385122 kubelet[1806]: E0517 00:37:29.385059 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:30.385626 kubelet[1806]: E0517 00:37:30.385568 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:30.605757 update_engine[1402]: I0517 00:37:30.605684 1402 update_attempter.cc:509] Updating boot flags...
May 17 00:37:31.385875 kubelet[1806]: E0517 00:37:31.385817 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:32.104005 env[1411]: time="2025-05-17T00:37:32.103939898Z" level=info msg="shim disconnected" id=c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f
May 17 00:37:32.104438 env[1411]: time="2025-05-17T00:37:32.104010098Z" level=warning msg="cleaning up after shim disconnected" id=c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f namespace=k8s.io
May 17 00:37:32.104438 env[1411]: time="2025-05-17T00:37:32.104023398Z" level=info msg="cleaning up dead shim"
May 17 00:37:32.126060 env[1411]: time="2025-05-17T00:37:32.126017864Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:37:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2005 runtime=io.containerd.runc.v2\n"
May 17 00:37:32.387114 kubelet[1806]: E0517 00:37:32.386632 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:32.604745 env[1411]: time="2025-05-17T00:37:32.604486095Z" level=info msg="CreateContainer within sandbox \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:37:32.751344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount960966014.mount: Deactivated successfully.
May 17 00:37:32.770957 env[1411]: time="2025-05-17T00:37:32.770914793Z" level=info msg="CreateContainer within sandbox \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086\""
May 17 00:37:32.771784 env[1411]: time="2025-05-17T00:37:32.771753096Z" level=info msg="StartContainer for \"d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086\""
May 17 00:37:32.811999 systemd[1]: Started cri-containerd-d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086.scope.
May 17 00:37:32.862561 systemd[1]: cri-containerd-d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086.scope: Deactivated successfully.
May 17 00:37:32.863047 env[1411]: time="2025-05-17T00:37:32.863001669Z" level=info msg="StartContainer for \"d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086\" returns successfully"
May 17 00:37:32.865600 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:37:32.865777 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:37:32.865949 systemd[1]: Stopping systemd-sysctl.service...
May 17 00:37:32.868599 systemd[1]: Starting systemd-sysctl.service...
May 17 00:37:32.880883 systemd[1]: Finished systemd-sysctl.service.
May 17 00:37:32.963839 env[1411]: time="2025-05-17T00:37:32.963783270Z" level=info msg="shim disconnected" id=d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086
May 17 00:37:32.963839 env[1411]: time="2025-05-17T00:37:32.963840670Z" level=warning msg="cleaning up after shim disconnected" id=d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086 namespace=k8s.io
May 17 00:37:32.964127 env[1411]: time="2025-05-17T00:37:32.963850570Z" level=info msg="cleaning up dead shim"
May 17 00:37:32.987048 env[1411]: time="2025-05-17T00:37:32.986997740Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:37:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2091 runtime=io.containerd.runc.v2\n"
May 17 00:37:33.387789 kubelet[1806]: E0517 00:37:33.387729 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:33.533462 env[1411]: time="2025-05-17T00:37:33.533405475Z" level=info msg="CreateContainer within sandbox \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:37:33.575903 env[1411]: time="2025-05-17T00:37:33.575836594Z" level=info msg="CreateContainer within sandbox \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107\""
May 17 00:37:33.576861 env[1411]: time="2025-05-17T00:37:33.576821797Z" level=info msg="StartContainer for \"54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107\""
May 17 00:37:33.616753 systemd[1]: Started cri-containerd-54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107.scope.
May 17 00:37:33.658770 systemd[1]: cri-containerd-54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107.scope: Deactivated successfully.
May 17 00:37:33.662282 env[1411]: time="2025-05-17T00:37:33.662244836Z" level=info msg="StartContainer for \"54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107\" returns successfully"
May 17 00:37:33.729466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086-rootfs.mount: Deactivated successfully.
May 17 00:37:33.729592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount558743935.mount: Deactivated successfully.
May 17 00:37:34.212356 env[1411]: time="2025-05-17T00:37:34.212296243Z" level=info msg="shim disconnected" id=54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107
May 17 00:37:34.212356 env[1411]: time="2025-05-17T00:37:34.212355143Z" level=warning msg="cleaning up after shim disconnected" id=54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107 namespace=k8s.io
May 17 00:37:34.212646 env[1411]: time="2025-05-17T00:37:34.212368343Z" level=info msg="cleaning up dead shim"
May 17 00:37:34.221970 env[1411]: time="2025-05-17T00:37:34.221905468Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:37:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2150 runtime=io.containerd.runc.v2\n"
May 17 00:37:34.388283 kubelet[1806]: E0517 00:37:34.388221 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:34.408078 env[1411]: time="2025-05-17T00:37:34.408021457Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:37:34.418063 env[1411]: time="2025-05-17T00:37:34.417999984Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:37:34.422531 env[1411]: time="2025-05-17T00:37:34.422479396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:37:34.429187 env[1411]: time="2025-05-17T00:37:34.429116413Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:37:34.429558 env[1411]: time="2025-05-17T00:37:34.429521514Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\""
May 17 00:37:34.438569 env[1411]: time="2025-05-17T00:37:34.438525438Z" level=info msg="CreateContainer within sandbox \"56ce81a922768e9334155c0d1eec50b9e0133fbe611247fdb3b9037275f91b62\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:37:34.476241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4081547392.mount: Deactivated successfully.
May 17 00:37:34.492121 env[1411]: time="2025-05-17T00:37:34.492067978Z" level=info msg="CreateContainer within sandbox \"56ce81a922768e9334155c0d1eec50b9e0133fbe611247fdb3b9037275f91b62\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"16de08b75cbcc414cf608427ca0a7886d72b8a1b97184aee41275aeedbead2ce\""
May 17 00:37:34.492834 env[1411]: time="2025-05-17T00:37:34.492797780Z" level=info msg="StartContainer for \"16de08b75cbcc414cf608427ca0a7886d72b8a1b97184aee41275aeedbead2ce\""
May 17 00:37:34.510926 systemd[1]: Started cri-containerd-16de08b75cbcc414cf608427ca0a7886d72b8a1b97184aee41275aeedbead2ce.scope.
May 17 00:37:34.532671 env[1411]: time="2025-05-17T00:37:34.532625085Z" level=info msg="CreateContainer within sandbox \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:37:34.561806 env[1411]: time="2025-05-17T00:37:34.561755762Z" level=info msg="StartContainer for \"16de08b75cbcc414cf608427ca0a7886d72b8a1b97184aee41275aeedbead2ce\" returns successfully"
May 17 00:37:34.588655 env[1411]: time="2025-05-17T00:37:34.588598332Z" level=info msg="CreateContainer within sandbox \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0\""
May 17 00:37:34.590206 env[1411]: time="2025-05-17T00:37:34.590177336Z" level=info msg="StartContainer for \"5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0\""
May 17 00:37:34.614628 systemd[1]: Started cri-containerd-5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0.scope.
May 17 00:37:34.652590 systemd[1]: cri-containerd-5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0.scope: Deactivated successfully.
May 17 00:37:34.656522 env[1411]: time="2025-05-17T00:37:34.656123410Z" level=info msg="StartContainer for \"5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0\" returns successfully"
May 17 00:37:34.857203 env[1411]: time="2025-05-17T00:37:34.857038938Z" level=info msg="shim disconnected" id=5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0
May 17 00:37:34.857203 env[1411]: time="2025-05-17T00:37:34.857171738Z" level=warning msg="cleaning up after shim disconnected" id=5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0 namespace=k8s.io
May 17 00:37:34.857203 env[1411]: time="2025-05-17T00:37:34.857193139Z" level=info msg="cleaning up dead shim"
May 17 00:37:34.866443 env[1411]: time="2025-05-17T00:37:34.866403763Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:37:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2280 runtime=io.containerd.runc.v2\n"
May 17 00:37:35.389246 kubelet[1806]: E0517 00:37:35.389191 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:35.544301 env[1411]: time="2025-05-17T00:37:35.544258856Z" level=info msg="CreateContainer within sandbox \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:37:35.550731 kubelet[1806]: I0517 00:37:35.550671 1806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4ctkc" podStartSLOduration=4.221425216 podStartE2EDuration="18.550652772s" podCreationTimestamp="2025-05-17 00:37:17 +0000 UTC" firstStartedPulling="2025-05-17 00:37:20.101353561 +0000 UTC m=+3.422915951" lastFinishedPulling="2025-05-17 00:37:34.430581117 +0000 UTC m=+17.752143507" observedRunningTime="2025-05-17 00:37:35.550550771 +0000 UTC m=+18.872113061" watchObservedRunningTime="2025-05-17 00:37:35.550652772 +0000 UTC m=+18.872215062"
May 17 00:37:35.571008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540318915.mount: Deactivated successfully.
May 17 00:37:35.579315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3297476926.mount: Deactivated successfully.
May 17 00:37:35.593406 env[1411]: time="2025-05-17T00:37:35.593362677Z" level=info msg="CreateContainer within sandbox \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712\""
May 17 00:37:35.594029 env[1411]: time="2025-05-17T00:37:35.593999578Z" level=info msg="StartContainer for \"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712\""
May 17 00:37:35.610498 systemd[1]: Started cri-containerd-60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712.scope.
May 17 00:37:35.643796 env[1411]: time="2025-05-17T00:37:35.643756101Z" level=info msg="StartContainer for \"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712\" returns successfully"
May 17 00:37:35.725297 kubelet[1806]: I0517 00:37:35.725264 1806 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 17 00:37:36.198173 kernel: Initializing XFRM netlink socket
May 17 00:37:36.390323 kubelet[1806]: E0517 00:37:36.390264 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:36.560960 kubelet[1806]: I0517 00:37:36.560565 1806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9lrhx" podStartSLOduration=11.373121728 podStartE2EDuration="19.560539276s" podCreationTimestamp="2025-05-17 00:37:17 +0000 UTC" firstStartedPulling="2025-05-17 00:37:20.098782545 +0000 UTC m=+3.420344935" lastFinishedPulling="2025-05-17 00:37:28.286200093 +0000 UTC m=+11.607762483" observedRunningTime="2025-05-17 00:37:36.560500976 +0000 UTC m=+19.882063366" watchObservedRunningTime="2025-05-17 00:37:36.560539276 +0000 UTC m=+19.882101566"
May 17 00:37:37.371372 kubelet[1806]: E0517 00:37:37.371281 1806 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:37.390425 kubelet[1806]: E0517 00:37:37.390379 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:37.846323 systemd-networkd[1556]: cilium_host: Link UP
May 17 00:37:37.846494 systemd-networkd[1556]: cilium_net: Link UP
May 17 00:37:37.851605 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
May 17 00:37:37.851692 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 17 00:37:37.848041 systemd-networkd[1556]: cilium_net: Gained carrier
May 17 00:37:37.852849 systemd-networkd[1556]: cilium_host: Gained carrier
May 17 00:37:38.066623 systemd-networkd[1556]: cilium_vxlan: Link UP
May 17 00:37:38.066633 systemd-networkd[1556]: cilium_vxlan: Gained carrier
May 17 00:37:38.320309 systemd-networkd[1556]: cilium_net: Gained IPv6LL
May 17 00:37:38.361224 kernel: NET: Registered PF_ALG protocol family
May 17 00:37:38.391221 kubelet[1806]: E0517 00:37:38.391132 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:37:38.456294 systemd-networkd[1556]: cilium_host: Gained IPv6LL
May 17 00:37:38.682184 systemd[1]: Created slice kubepods-besteffort-pod68e792c6_ba5e_47ba_89ee_30b689e1d8db.slice.
May 17 00:37:38.774305 kubelet[1806]: I0517 00:37:38.774183 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7vgh\" (UniqueName: \"kubernetes.io/projected/68e792c6-ba5e-47ba-89ee-30b689e1d8db-kube-api-access-w7vgh\") pod \"nginx-deployment-7fcdb87857-6xccx\" (UID: \"68e792c6-ba5e-47ba-89ee-30b689e1d8db\") " pod="default/nginx-deployment-7fcdb87857-6xccx" May 17 00:37:38.986619 env[1411]: time="2025-05-17T00:37:38.986502062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-6xccx,Uid:68e792c6-ba5e-47ba-89ee-30b689e1d8db,Namespace:default,Attempt:0,}" May 17 00:37:39.191616 systemd-networkd[1556]: lxc_health: Link UP May 17 00:37:39.200598 systemd-networkd[1556]: lxc_health: Gained carrier May 17 00:37:39.201244 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:37:39.392089 kubelet[1806]: E0517 00:37:39.392049 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:39.558653 systemd-networkd[1556]: lxc20c1a99d15d3: Link UP May 17 00:37:39.568176 kernel: eth0: renamed from tmp650bc May 17 00:37:39.577255 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc20c1a99d15d3: link becomes ready May 17 00:37:39.580902 systemd-networkd[1556]: lxc20c1a99d15d3: Gained carrier May 17 00:37:39.928327 systemd-networkd[1556]: cilium_vxlan: Gained IPv6LL May 17 00:37:40.330399 kubelet[1806]: I0517 00:37:40.330274 1806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:37:40.392414 kubelet[1806]: E0517 00:37:40.392359 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:41.144347 systemd-networkd[1556]: lxc_health: Gained IPv6LL May 17 00:37:41.393047 kubelet[1806]: E0517 00:37:41.393001 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" May 17 00:37:41.592321 systemd-networkd[1556]: lxc20c1a99d15d3: Gained IPv6LL May 17 00:37:42.394201 kubelet[1806]: E0517 00:37:42.394133 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:43.040624 env[1411]: time="2025-05-17T00:37:43.040542355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:43.041102 env[1411]: time="2025-05-17T00:37:43.040589256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:43.041102 env[1411]: time="2025-05-17T00:37:43.040625057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:43.041102 env[1411]: time="2025-05-17T00:37:43.040887962Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/650bc1e873dec7f8b62c8ab7fb2a9008584f73a9049ded4576e1d1634d3358b8 pid=2903 runtime=io.containerd.runc.v2 May 17 00:37:43.062257 systemd[1]: Started cri-containerd-650bc1e873dec7f8b62c8ab7fb2a9008584f73a9049ded4576e1d1634d3358b8.scope. 
May 17 00:37:43.102342 env[1411]: time="2025-05-17T00:37:43.102292923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-6xccx,Uid:68e792c6-ba5e-47ba-89ee-30b689e1d8db,Namespace:default,Attempt:0,} returns sandbox id \"650bc1e873dec7f8b62c8ab7fb2a9008584f73a9049ded4576e1d1634d3358b8\"" May 17 00:37:43.103735 env[1411]: time="2025-05-17T00:37:43.103706450Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 17 00:37:43.395856 kubelet[1806]: E0517 00:37:43.395819 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:44.396055 kubelet[1806]: E0517 00:37:44.396010 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:45.396444 kubelet[1806]: E0517 00:37:45.396377 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:46.191247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3436715589.mount: Deactivated successfully. 
May 17 00:37:46.397376 kubelet[1806]: E0517 00:37:46.397302 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:47.397684 kubelet[1806]: E0517 00:37:47.397633 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:47.763301 env[1411]: time="2025-05-17T00:37:47.762885378Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:47.768867 env[1411]: time="2025-05-17T00:37:47.768826379Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:47.774549 env[1411]: time="2025-05-17T00:37:47.774515075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:47.779043 env[1411]: time="2025-05-17T00:37:47.779007751Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:47.779708 env[1411]: time="2025-05-17T00:37:47.779674962Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 17 00:37:47.786592 env[1411]: time="2025-05-17T00:37:47.786558879Z" level=info msg="CreateContainer within sandbox \"650bc1e873dec7f8b62c8ab7fb2a9008584f73a9049ded4576e1d1634d3358b8\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 17 00:37:47.815055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount943968902.mount: 
Deactivated successfully. May 17 00:37:47.822728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2770333338.mount: Deactivated successfully. May 17 00:37:47.837589 env[1411]: time="2025-05-17T00:37:47.837537841Z" level=info msg="CreateContainer within sandbox \"650bc1e873dec7f8b62c8ab7fb2a9008584f73a9049ded4576e1d1634d3358b8\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"7e9c2dac8e04f89056e5894166629f8d8aa93511593ab6a99d50fb962acc9eea\"" May 17 00:37:47.838193 env[1411]: time="2025-05-17T00:37:47.838159252Z" level=info msg="StartContainer for \"7e9c2dac8e04f89056e5894166629f8d8aa93511593ab6a99d50fb962acc9eea\"" May 17 00:37:47.857285 systemd[1]: Started cri-containerd-7e9c2dac8e04f89056e5894166629f8d8aa93511593ab6a99d50fb962acc9eea.scope. May 17 00:37:47.890996 env[1411]: time="2025-05-17T00:37:47.890523137Z" level=info msg="StartContainer for \"7e9c2dac8e04f89056e5894166629f8d8aa93511593ab6a99d50fb962acc9eea\" returns successfully" May 17 00:37:48.398468 kubelet[1806]: E0517 00:37:48.398393 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:48.575012 kubelet[1806]: I0517 00:37:48.574950 1806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-6xccx" podStartSLOduration=5.897129113 podStartE2EDuration="10.574933658s" podCreationTimestamp="2025-05-17 00:37:38 +0000 UTC" firstStartedPulling="2025-05-17 00:37:43.103132339 +0000 UTC m=+26.424694729" lastFinishedPulling="2025-05-17 00:37:47.780936984 +0000 UTC m=+31.102499274" observedRunningTime="2025-05-17 00:37:48.574825256 +0000 UTC m=+31.896387646" watchObservedRunningTime="2025-05-17 00:37:48.574933658 +0000 UTC m=+31.896495948" May 17 00:37:49.398904 kubelet[1806]: E0517 00:37:49.398829 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:50.400085 
kubelet[1806]: E0517 00:37:50.400025 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:51.401122 kubelet[1806]: E0517 00:37:51.401070 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:52.402034 kubelet[1806]: E0517 00:37:52.401977 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:53.403158 kubelet[1806]: E0517 00:37:53.403084 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:54.404157 kubelet[1806]: E0517 00:37:54.404097 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:55.404814 kubelet[1806]: E0517 00:37:55.404762 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:55.449894 systemd[1]: Created slice kubepods-besteffort-pode53245c4_aece_430e_96f2_7181d26a820f.slice. 
May 17 00:37:55.575478 kubelet[1806]: I0517 00:37:55.575333 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e53245c4-aece-430e-96f2-7181d26a820f-data\") pod \"nfs-server-provisioner-0\" (UID: \"e53245c4-aece-430e-96f2-7181d26a820f\") " pod="default/nfs-server-provisioner-0" May 17 00:37:55.575478 kubelet[1806]: I0517 00:37:55.575408 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9grq\" (UniqueName: \"kubernetes.io/projected/e53245c4-aece-430e-96f2-7181d26a820f-kube-api-access-q9grq\") pod \"nfs-server-provisioner-0\" (UID: \"e53245c4-aece-430e-96f2-7181d26a820f\") " pod="default/nfs-server-provisioner-0" May 17 00:37:55.753318 env[1411]: time="2025-05-17T00:37:55.752861341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e53245c4-aece-430e-96f2-7181d26a820f,Namespace:default,Attempt:0,}" May 17 00:37:55.821514 systemd-networkd[1556]: lxc0d04f35b24e5: Link UP May 17 00:37:55.834001 kernel: eth0: renamed from tmpdb561 May 17 00:37:55.843410 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:37:55.843519 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0d04f35b24e5: link becomes ready May 17 00:37:55.843686 systemd-networkd[1556]: lxc0d04f35b24e5: Gained carrier May 17 00:37:56.030104 env[1411]: time="2025-05-17T00:37:56.029852599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:56.030104 env[1411]: time="2025-05-17T00:37:56.029888699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:56.030104 env[1411]: time="2025-05-17T00:37:56.029899000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:56.030561 env[1411]: time="2025-05-17T00:37:56.030047602Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db561e4db750b650aa32558c39f47e6b49a1c7b4599ec15a7d8e64acd809f96a pid=3026 runtime=io.containerd.runc.v2 May 17 00:37:56.052824 systemd[1]: Started cri-containerd-db561e4db750b650aa32558c39f47e6b49a1c7b4599ec15a7d8e64acd809f96a.scope. May 17 00:37:56.093431 env[1411]: time="2025-05-17T00:37:56.093377741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e53245c4-aece-430e-96f2-7181d26a820f,Namespace:default,Attempt:0,} returns sandbox id \"db561e4db750b650aa32558c39f47e6b49a1c7b4599ec15a7d8e64acd809f96a\"" May 17 00:37:56.095258 env[1411]: time="2025-05-17T00:37:56.095199765Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 17 00:37:56.405067 kubelet[1806]: E0517 00:37:56.405005 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:56.686930 systemd[1]: run-containerd-runc-k8s.io-db561e4db750b650aa32558c39f47e6b49a1c7b4599ec15a7d8e64acd809f96a-runc.9KrcRy.mount: Deactivated successfully. 
May 17 00:37:57.371555 kubelet[1806]: E0517 00:37:57.371502 1806 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:57.405527 kubelet[1806]: E0517 00:37:57.405467 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:57.660345 systemd-networkd[1556]: lxc0d04f35b24e5: Gained IPv6LL May 17 00:37:58.405906 kubelet[1806]: E0517 00:37:58.405839 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:37:58.944702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount948116518.mount: Deactivated successfully. May 17 00:37:59.406327 kubelet[1806]: E0517 00:37:59.406271 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:00.406802 kubelet[1806]: E0517 00:38:00.406745 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:00.997247 env[1411]: time="2025-05-17T00:38:00.997183456Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:38:01.003090 env[1411]: time="2025-05-17T00:38:01.003040625Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:38:01.009125 env[1411]: time="2025-05-17T00:38:01.009076195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:38:01.013434 env[1411]: time="2025-05-17T00:38:01.013391345Z" level=info msg="ImageCreate 
event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:38:01.013997 env[1411]: time="2025-05-17T00:38:01.013963752Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 17 00:38:01.020849 env[1411]: time="2025-05-17T00:38:01.020807631Z" level=info msg="CreateContainer within sandbox \"db561e4db750b650aa32558c39f47e6b49a1c7b4599ec15a7d8e64acd809f96a\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 17 00:38:01.051929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2736332190.mount: Deactivated successfully. May 17 00:38:01.069452 env[1411]: time="2025-05-17T00:38:01.069396396Z" level=info msg="CreateContainer within sandbox \"db561e4db750b650aa32558c39f47e6b49a1c7b4599ec15a7d8e64acd809f96a\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"efb949f962a45866a2305a5974620b9060af7c512910b500c66ac9a40a835c65\"" May 17 00:38:01.070213 env[1411]: time="2025-05-17T00:38:01.070171605Z" level=info msg="StartContainer for \"efb949f962a45866a2305a5974620b9060af7c512910b500c66ac9a40a835c65\"" May 17 00:38:01.090897 systemd[1]: Started cri-containerd-efb949f962a45866a2305a5974620b9060af7c512910b500c66ac9a40a835c65.scope. 
May 17 00:38:01.127402 env[1411]: time="2025-05-17T00:38:01.127346770Z" level=info msg="StartContainer for \"efb949f962a45866a2305a5974620b9060af7c512910b500c66ac9a40a835c65\" returns successfully" May 17 00:38:01.407688 kubelet[1806]: E0517 00:38:01.407630 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:01.606094 kubelet[1806]: I0517 00:38:01.605895 1806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.685265421 podStartE2EDuration="6.605876431s" podCreationTimestamp="2025-05-17 00:37:55 +0000 UTC" firstStartedPulling="2025-05-17 00:37:56.094621757 +0000 UTC m=+39.416184047" lastFinishedPulling="2025-05-17 00:38:01.015232667 +0000 UTC m=+44.336795057" observedRunningTime="2025-05-17 00:38:01.605493226 +0000 UTC m=+44.927055616" watchObservedRunningTime="2025-05-17 00:38:01.605876431 +0000 UTC m=+44.927438821" May 17 00:38:02.408454 kubelet[1806]: E0517 00:38:02.408298 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:03.408918 kubelet[1806]: E0517 00:38:03.408863 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:04.409074 kubelet[1806]: E0517 00:38:04.409009 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:05.409875 kubelet[1806]: E0517 00:38:05.409815 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:06.376099 systemd[1]: Created slice kubepods-besteffort-poda4996f18_0af7_4de6_bf65_f985d383e409.slice. 
May 17 00:38:06.410186 kubelet[1806]: E0517 00:38:06.410125 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:06.437538 kubelet[1806]: I0517 00:38:06.437498 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gxfm\" (UniqueName: \"kubernetes.io/projected/a4996f18-0af7-4de6-bf65-f985d383e409-kube-api-access-7gxfm\") pod \"test-pod-1\" (UID: \"a4996f18-0af7-4de6-bf65-f985d383e409\") " pod="default/test-pod-1" May 17 00:38:06.437732 kubelet[1806]: I0517 00:38:06.437558 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f9748216-6783-45b6-be59-405dadbebd80\" (UniqueName: \"kubernetes.io/nfs/a4996f18-0af7-4de6-bf65-f985d383e409-pvc-f9748216-6783-45b6-be59-405dadbebd80\") pod \"test-pod-1\" (UID: \"a4996f18-0af7-4de6-bf65-f985d383e409\") " pod="default/test-pod-1" May 17 00:38:06.743172 kernel: FS-Cache: Loaded May 17 00:38:06.893567 kernel: RPC: Registered named UNIX socket transport module. May 17 00:38:06.893711 kernel: RPC: Registered udp transport module. May 17 00:38:06.893737 kernel: RPC: Registered tcp transport module. May 17 00:38:06.898276 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 17 00:38:07.135170 kernel: FS-Cache: Netfs 'nfs' registered for caching May 17 00:38:07.411283 kubelet[1806]: E0517 00:38:07.411203 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:07.414123 kernel: NFS: Registering the id_resolver key type May 17 00:38:07.414206 kernel: Key type id_resolver registered May 17 00:38:07.414231 kernel: Key type id_legacy registered May 17 00:38:07.838555 nfsidmap[3139]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.7-n-51492a5456' May 17 00:38:07.859502 nfsidmap[3140]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.7-n-51492a5456' May 17 00:38:07.879094 env[1411]: time="2025-05-17T00:38:07.879048968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a4996f18-0af7-4de6-bf65-f985d383e409,Namespace:default,Attempt:0,}" May 17 00:38:07.955738 systemd-networkd[1556]: lxc82d62b98c3c2: Link UP May 17 00:38:07.967191 kernel: eth0: renamed from tmp2c881 May 17 00:38:07.975496 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:38:07.975586 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc82d62b98c3c2: link becomes ready May 17 00:38:07.975735 systemd-networkd[1556]: lxc82d62b98c3c2: Gained carrier May 17 00:38:08.150382 env[1411]: time="2025-05-17T00:38:08.150274241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:38:08.150639 env[1411]: time="2025-05-17T00:38:08.150584544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:38:08.150639 env[1411]: time="2025-05-17T00:38:08.150607045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:38:08.150961 env[1411]: time="2025-05-17T00:38:08.150914648Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c881ca22f5600a0664afcfd7cde08c776f1ed59e18724fa6b722ed328cc251a pid=3166 runtime=io.containerd.runc.v2 May 17 00:38:08.166791 systemd[1]: Started cri-containerd-2c881ca22f5600a0664afcfd7cde08c776f1ed59e18724fa6b722ed328cc251a.scope. May 17 00:38:08.209511 env[1411]: time="2025-05-17T00:38:08.209457618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a4996f18-0af7-4de6-bf65-f985d383e409,Namespace:default,Attempt:0,} returns sandbox id \"2c881ca22f5600a0664afcfd7cde08c776f1ed59e18724fa6b722ed328cc251a\"" May 17 00:38:08.210672 env[1411]: time="2025-05-17T00:38:08.210627629Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 17 00:38:08.412213 kubelet[1806]: E0517 00:38:08.412080 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:08.565853 env[1411]: time="2025-05-17T00:38:08.565788687Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:38:08.579823 env[1411]: time="2025-05-17T00:38:08.579778723Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:38:08.586978 env[1411]: time="2025-05-17T00:38:08.586941593Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:38:08.592732 env[1411]: time="2025-05-17T00:38:08.592695749Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:38:08.593413 env[1411]: time="2025-05-17T00:38:08.593382856Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 17 00:38:08.600962 env[1411]: time="2025-05-17T00:38:08.600927229Z" level=info msg="CreateContainer within sandbox \"2c881ca22f5600a0664afcfd7cde08c776f1ed59e18724fa6b722ed328cc251a\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 17 00:38:08.641378 env[1411]: time="2025-05-17T00:38:08.641322722Z" level=info msg="CreateContainer within sandbox \"2c881ca22f5600a0664afcfd7cde08c776f1ed59e18724fa6b722ed328cc251a\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"474cf9de31f5fae416f6cb135918cdfdc128183c4f16372db3ded0fc0e5da848\"" May 17 00:38:08.642020 env[1411]: time="2025-05-17T00:38:08.641872428Z" level=info msg="StartContainer for \"474cf9de31f5fae416f6cb135918cdfdc128183c4f16372db3ded0fc0e5da848\"" May 17 00:38:08.659081 systemd[1]: Started cri-containerd-474cf9de31f5fae416f6cb135918cdfdc128183c4f16372db3ded0fc0e5da848.scope. 
May 17 00:38:08.691431 env[1411]: time="2025-05-17T00:38:08.691384610Z" level=info msg="StartContainer for \"474cf9de31f5fae416f6cb135918cdfdc128183c4f16372db3ded0fc0e5da848\" returns successfully" May 17 00:38:09.176487 systemd-networkd[1556]: lxc82d62b98c3c2: Gained IPv6LL May 17 00:38:09.413305 kubelet[1806]: E0517 00:38:09.413204 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:09.616658 kubelet[1806]: I0517 00:38:09.616524 1806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=13.232291331 podStartE2EDuration="13.616506772s" podCreationTimestamp="2025-05-17 00:37:56 +0000 UTC" firstStartedPulling="2025-05-17 00:38:08.210336226 +0000 UTC m=+51.531898516" lastFinishedPulling="2025-05-17 00:38:08.594551667 +0000 UTC m=+51.916113957" observedRunningTime="2025-05-17 00:38:09.616246769 +0000 UTC m=+52.937809059" watchObservedRunningTime="2025-05-17 00:38:09.616506772 +0000 UTC m=+52.938069062" May 17 00:38:10.413392 kubelet[1806]: E0517 00:38:10.413339 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:11.414316 kubelet[1806]: E0517 00:38:11.414248 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:12.414845 kubelet[1806]: E0517 00:38:12.414752 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:12.689570 env[1411]: time="2025-05-17T00:38:12.689448732Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:38:12.694584 env[1411]: time="2025-05-17T00:38:12.694545977Z" 
level=info msg="StopContainer for \"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712\" with timeout 2 (s)" May 17 00:38:12.694803 env[1411]: time="2025-05-17T00:38:12.694773779Z" level=info msg="Stop container \"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712\" with signal terminated" May 17 00:38:12.701918 systemd-networkd[1556]: lxc_health: Link DOWN May 17 00:38:12.701926 systemd-networkd[1556]: lxc_health: Lost carrier May 17 00:38:12.723481 systemd[1]: cri-containerd-60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712.scope: Deactivated successfully. May 17 00:38:12.723805 systemd[1]: cri-containerd-60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712.scope: Consumed 6.984s CPU time. May 17 00:38:12.749766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712-rootfs.mount: Deactivated successfully. May 17 00:38:13.415269 kubelet[1806]: E0517 00:38:13.415214 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:14.416017 kubelet[1806]: E0517 00:38:14.415951 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:14.703922 env[1411]: time="2025-05-17T00:38:14.703554227Z" level=info msg="Kill container \"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712\"" May 17 00:38:15.416825 kubelet[1806]: E0517 00:38:15.416749 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:16.350915 env[1411]: time="2025-05-17T00:38:16.350826072Z" level=info msg="shim disconnected" id=60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712 May 17 00:38:16.350915 env[1411]: time="2025-05-17T00:38:16.350919873Z" level=warning msg="cleaning up after shim disconnected" 
id=60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712 namespace=k8s.io May 17 00:38:16.351553 env[1411]: time="2025-05-17T00:38:16.350935073Z" level=info msg="cleaning up dead shim" May 17 00:38:16.360733 env[1411]: time="2025-05-17T00:38:16.360687752Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3299 runtime=io.containerd.runc.v2\n" May 17 00:38:16.369363 env[1411]: time="2025-05-17T00:38:16.368542315Z" level=info msg="StopContainer for \"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712\" returns successfully" May 17 00:38:16.370015 env[1411]: time="2025-05-17T00:38:16.369981526Z" level=info msg="StopPodSandbox for \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\"" May 17 00:38:16.370282 systemd[1]: Created slice kubepods-besteffort-pod248de5ae_5b22_4632_a7b6_5b17a1431892.slice. May 17 00:38:16.372367 env[1411]: time="2025-05-17T00:38:16.372329945Z" level=info msg="Container to stop \"d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:16.372494 env[1411]: time="2025-05-17T00:38:16.372469846Z" level=info msg="Container to stop \"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:16.372597 env[1411]: time="2025-05-17T00:38:16.372565747Z" level=info msg="Container to stop \"c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:16.372597 env[1411]: time="2025-05-17T00:38:16.372591147Z" level=info msg="Container to stop \"54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:16.376392 env[1411]: time="2025-05-17T00:38:16.372607147Z" level=info 
msg="Container to stop \"5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:16.377164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700-shm.mount: Deactivated successfully. May 17 00:38:16.381399 systemd[1]: cri-containerd-94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700.scope: Deactivated successfully. May 17 00:38:16.402990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700-rootfs.mount: Deactivated successfully. May 17 00:38:16.417511 env[1411]: time="2025-05-17T00:38:16.417451008Z" level=info msg="shim disconnected" id=94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700 May 17 00:38:16.417680 env[1411]: time="2025-05-17T00:38:16.417637809Z" level=warning msg="cleaning up after shim disconnected" id=94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700 namespace=k8s.io May 17 00:38:16.417680 env[1411]: time="2025-05-17T00:38:16.417658609Z" level=info msg="cleaning up dead shim" May 17 00:38:16.417782 kubelet[1806]: E0517 00:38:16.417526 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:16.425436 env[1411]: time="2025-05-17T00:38:16.425396972Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3330 runtime=io.containerd.runc.v2\n" May 17 00:38:16.425751 env[1411]: time="2025-05-17T00:38:16.425718674Z" level=info msg="TearDown network for sandbox \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" successfully" May 17 00:38:16.425841 env[1411]: time="2025-05-17T00:38:16.425750475Z" level=info msg="StopPodSandbox for \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" returns 
successfully" May 17 00:38:16.472511 systemd[1]: Created slice kubepods-burstable-podeabbfea2_39f6_4075_8f9c_91e8de2912a9.slice. May 17 00:38:16.501013 kubelet[1806]: I0517 00:38:16.500982 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-bpf-maps\") pod \"3e1b7f17-dbc2-4069-a940-712425198af5\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " May 17 00:38:16.501177 kubelet[1806]: I0517 00:38:16.501095 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-lib-modules\") pod \"3e1b7f17-dbc2-4069-a940-712425198af5\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " May 17 00:38:16.501177 kubelet[1806]: I0517 00:38:16.501121 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-cilium-cgroup\") pod \"3e1b7f17-dbc2-4069-a940-712425198af5\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " May 17 00:38:16.501177 kubelet[1806]: I0517 00:38:16.501171 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e1b7f17-dbc2-4069-a940-712425198af5-hubble-tls\") pod \"3e1b7f17-dbc2-4069-a940-712425198af5\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " May 17 00:38:16.501329 kubelet[1806]: I0517 00:38:16.501194 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-host-proc-sys-net\") pod \"3e1b7f17-dbc2-4069-a940-712425198af5\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " May 17 00:38:16.501329 kubelet[1806]: I0517 00:38:16.501216 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-etc-cni-netd\") pod \"3e1b7f17-dbc2-4069-a940-712425198af5\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " May 17 00:38:16.501329 kubelet[1806]: I0517 00:38:16.501251 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-cilium-run\") pod \"3e1b7f17-dbc2-4069-a940-712425198af5\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " May 17 00:38:16.501329 kubelet[1806]: I0517 00:38:16.501271 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-cni-path\") pod \"3e1b7f17-dbc2-4069-a940-712425198af5\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " May 17 00:38:16.501329 kubelet[1806]: I0517 00:38:16.501296 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e1b7f17-dbc2-4069-a940-712425198af5-cilium-config-path\") pod \"3e1b7f17-dbc2-4069-a940-712425198af5\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " May 17 00:38:16.501530 kubelet[1806]: I0517 00:38:16.501331 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-hostproc\") pod \"3e1b7f17-dbc2-4069-a940-712425198af5\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " May 17 00:38:16.501530 kubelet[1806]: I0517 00:38:16.501353 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-host-proc-sys-kernel\") pod \"3e1b7f17-dbc2-4069-a940-712425198af5\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " May 17 00:38:16.501530 kubelet[1806]: I0517 
00:38:16.501382 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e1b7f17-dbc2-4069-a940-712425198af5-clustermesh-secrets\") pod \"3e1b7f17-dbc2-4069-a940-712425198af5\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " May 17 00:38:16.501530 kubelet[1806]: I0517 00:38:16.501420 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkz5p\" (UniqueName: \"kubernetes.io/projected/3e1b7f17-dbc2-4069-a940-712425198af5-kube-api-access-hkz5p\") pod \"3e1b7f17-dbc2-4069-a940-712425198af5\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " May 17 00:38:16.501530 kubelet[1806]: I0517 00:38:16.501442 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-xtables-lock\") pod \"3e1b7f17-dbc2-4069-a940-712425198af5\" (UID: \"3e1b7f17-dbc2-4069-a940-712425198af5\") " May 17 00:38:16.501530 kubelet[1806]: I0517 00:38:16.501502 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/248de5ae-5b22-4632-a7b6-5b17a1431892-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wk62w\" (UID: \"248de5ae-5b22-4632-a7b6-5b17a1431892\") " pod="kube-system/cilium-operator-6c4d7847fc-wk62w" May 17 00:38:16.501776 kubelet[1806]: I0517 00:38:16.501535 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nspbp\" (UniqueName: \"kubernetes.io/projected/248de5ae-5b22-4632-a7b6-5b17a1431892-kube-api-access-nspbp\") pod \"cilium-operator-6c4d7847fc-wk62w\" (UID: \"248de5ae-5b22-4632-a7b6-5b17a1431892\") " pod="kube-system/cilium-operator-6c4d7847fc-wk62w" May 17 00:38:16.501776 kubelet[1806]: I0517 00:38:16.501045 1806 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3e1b7f17-dbc2-4069-a940-712425198af5" (UID: "3e1b7f17-dbc2-4069-a940-712425198af5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:16.501776 kubelet[1806]: I0517 00:38:16.501621 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3e1b7f17-dbc2-4069-a940-712425198af5" (UID: "3e1b7f17-dbc2-4069-a940-712425198af5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:16.501776 kubelet[1806]: I0517 00:38:16.501658 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3e1b7f17-dbc2-4069-a940-712425198af5" (UID: "3e1b7f17-dbc2-4069-a940-712425198af5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:16.502003 kubelet[1806]: I0517 00:38:16.501979 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-hostproc" (OuterVolumeSpecName: "hostproc") pod "3e1b7f17-dbc2-4069-a940-712425198af5" (UID: "3e1b7f17-dbc2-4069-a940-712425198af5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:16.502128 kubelet[1806]: I0517 00:38:16.502111 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3e1b7f17-dbc2-4069-a940-712425198af5" (UID: "3e1b7f17-dbc2-4069-a940-712425198af5"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:16.502255 kubelet[1806]: I0517 00:38:16.502237 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3e1b7f17-dbc2-4069-a940-712425198af5" (UID: "3e1b7f17-dbc2-4069-a940-712425198af5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:16.502358 kubelet[1806]: I0517 00:38:16.502344 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3e1b7f17-dbc2-4069-a940-712425198af5" (UID: "3e1b7f17-dbc2-4069-a940-712425198af5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:16.502455 kubelet[1806]: I0517 00:38:16.502434 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3e1b7f17-dbc2-4069-a940-712425198af5" (UID: "3e1b7f17-dbc2-4069-a940-712425198af5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:16.502516 kubelet[1806]: I0517 00:38:16.502443 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-cni-path" (OuterVolumeSpecName: "cni-path") pod "3e1b7f17-dbc2-4069-a940-712425198af5" (UID: "3e1b7f17-dbc2-4069-a940-712425198af5"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:16.505132 kubelet[1806]: I0517 00:38:16.505103 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e1b7f17-dbc2-4069-a940-712425198af5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3e1b7f17-dbc2-4069-a940-712425198af5" (UID: "3e1b7f17-dbc2-4069-a940-712425198af5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:38:16.505867 kubelet[1806]: I0517 00:38:16.505275 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3e1b7f17-dbc2-4069-a940-712425198af5" (UID: "3e1b7f17-dbc2-4069-a940-712425198af5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:16.508221 systemd[1]: var-lib-kubelet-pods-3e1b7f17\x2ddbc2\x2d4069\x2da940\x2d712425198af5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:38:16.509637 kubelet[1806]: I0517 00:38:16.509613 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e1b7f17-dbc2-4069-a940-712425198af5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3e1b7f17-dbc2-4069-a940-712425198af5" (UID: "3e1b7f17-dbc2-4069-a940-712425198af5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:38:16.516927 kubelet[1806]: I0517 00:38:16.512330 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1b7f17-dbc2-4069-a940-712425198af5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3e1b7f17-dbc2-4069-a940-712425198af5" (UID: "3e1b7f17-dbc2-4069-a940-712425198af5"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:38:16.514113 systemd[1]: var-lib-kubelet-pods-3e1b7f17\x2ddbc2\x2d4069\x2da940\x2d712425198af5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:38:16.516902 systemd[1]: var-lib-kubelet-pods-3e1b7f17\x2ddbc2\x2d4069\x2da940\x2d712425198af5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhkz5p.mount: Deactivated successfully. May 17 00:38:16.518198 kubelet[1806]: I0517 00:38:16.517891 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e1b7f17-dbc2-4069-a940-712425198af5-kube-api-access-hkz5p" (OuterVolumeSpecName: "kube-api-access-hkz5p") pod "3e1b7f17-dbc2-4069-a940-712425198af5" (UID: "3e1b7f17-dbc2-4069-a940-712425198af5"). InnerVolumeSpecName "kube-api-access-hkz5p". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:38:16.605686 kubelet[1806]: I0517 00:38:16.602607 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cni-path\") pod \"cilium-88fgc\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " pod="kube-system/cilium-88fgc" May 17 00:38:16.605686 kubelet[1806]: I0517 00:38:16.602667 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-host-proc-sys-net\") pod \"cilium-88fgc\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " pod="kube-system/cilium-88fgc" May 17 00:38:16.605686 kubelet[1806]: I0517 00:38:16.602693 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eabbfea2-39f6-4075-8f9c-91e8de2912a9-hubble-tls\") pod \"cilium-88fgc\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " 
pod="kube-system/cilium-88fgc" May 17 00:38:16.605686 kubelet[1806]: I0517 00:38:16.602725 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-bpf-maps\") pod \"cilium-88fgc\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " pod="kube-system/cilium-88fgc" May 17 00:38:16.605686 kubelet[1806]: I0517 00:38:16.602753 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-config-path\") pod \"cilium-88fgc\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " pod="kube-system/cilium-88fgc" May 17 00:38:16.605686 kubelet[1806]: I0517 00:38:16.602780 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-run\") pod \"cilium-88fgc\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " pod="kube-system/cilium-88fgc" May 17 00:38:16.606181 kubelet[1806]: I0517 00:38:16.602809 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-hostproc\") pod \"cilium-88fgc\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " pod="kube-system/cilium-88fgc" May 17 00:38:16.606181 kubelet[1806]: I0517 00:38:16.602833 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-cgroup\") pod \"cilium-88fgc\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " pod="kube-system/cilium-88fgc" May 17 00:38:16.606181 kubelet[1806]: I0517 00:38:16.602857 1806 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-xtables-lock\") pod \"cilium-88fgc\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " pod="kube-system/cilium-88fgc" May 17 00:38:16.606181 kubelet[1806]: I0517 00:38:16.602883 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eabbfea2-39f6-4075-8f9c-91e8de2912a9-clustermesh-secrets\") pod \"cilium-88fgc\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " pod="kube-system/cilium-88fgc" May 17 00:38:16.606181 kubelet[1806]: I0517 00:38:16.602913 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-ipsec-secrets\") pod \"cilium-88fgc\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " pod="kube-system/cilium-88fgc" May 17 00:38:16.606181 kubelet[1806]: I0517 00:38:16.602946 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cpql\" (UniqueName: \"kubernetes.io/projected/eabbfea2-39f6-4075-8f9c-91e8de2912a9-kube-api-access-9cpql\") pod \"cilium-88fgc\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " pod="kube-system/cilium-88fgc" May 17 00:38:16.606541 kubelet[1806]: I0517 00:38:16.602975 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-lib-modules\") pod \"cilium-88fgc\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " pod="kube-system/cilium-88fgc" May 17 00:38:16.606541 kubelet[1806]: I0517 00:38:16.603004 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-host-proc-sys-kernel\") pod \"cilium-88fgc\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " pod="kube-system/cilium-88fgc" May 17 00:38:16.606541 kubelet[1806]: I0517 00:38:16.603051 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-etc-cni-netd\") pod \"cilium-88fgc\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " pod="kube-system/cilium-88fgc" May 17 00:38:16.606541 kubelet[1806]: I0517 00:38:16.603087 1806 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-lib-modules\") on node \"10.200.4.30\" DevicePath \"\"" May 17 00:38:16.606541 kubelet[1806]: I0517 00:38:16.603105 1806 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-cilium-cgroup\") on node \"10.200.4.30\" DevicePath \"\"" May 17 00:38:16.606541 kubelet[1806]: I0517 00:38:16.603121 1806 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e1b7f17-dbc2-4069-a940-712425198af5-hubble-tls\") on node \"10.200.4.30\" DevicePath \"\"" May 17 00:38:16.606541 kubelet[1806]: I0517 00:38:16.603137 1806 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-host-proc-sys-net\") on node \"10.200.4.30\" DevicePath \"\"" May 17 00:38:16.606953 kubelet[1806]: I0517 00:38:16.603169 1806 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-etc-cni-netd\") on node \"10.200.4.30\" DevicePath \"\"" May 17 00:38:16.606953 kubelet[1806]: I0517 00:38:16.603186 1806 
reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-cilium-run\") on node \"10.200.4.30\" DevicePath \"\"" May 17 00:38:16.606953 kubelet[1806]: I0517 00:38:16.603201 1806 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-cni-path\") on node \"10.200.4.30\" DevicePath \"\"" May 17 00:38:16.606953 kubelet[1806]: I0517 00:38:16.603217 1806 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e1b7f17-dbc2-4069-a940-712425198af5-cilium-config-path\") on node \"10.200.4.30\" DevicePath \"\"" May 17 00:38:16.606953 kubelet[1806]: I0517 00:38:16.603234 1806 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-hostproc\") on node \"10.200.4.30\" DevicePath \"\"" May 17 00:38:16.606953 kubelet[1806]: I0517 00:38:16.603249 1806 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-host-proc-sys-kernel\") on node \"10.200.4.30\" DevicePath \"\"" May 17 00:38:16.606953 kubelet[1806]: I0517 00:38:16.603265 1806 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e1b7f17-dbc2-4069-a940-712425198af5-clustermesh-secrets\") on node \"10.200.4.30\" DevicePath \"\"" May 17 00:38:16.606953 kubelet[1806]: I0517 00:38:16.603280 1806 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hkz5p\" (UniqueName: \"kubernetes.io/projected/3e1b7f17-dbc2-4069-a940-712425198af5-kube-api-access-hkz5p\") on node \"10.200.4.30\" DevicePath \"\"" May 17 00:38:16.607431 kubelet[1806]: I0517 00:38:16.603295 1806 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-xtables-lock\") on node \"10.200.4.30\" DevicePath \"\"" May 17 00:38:16.607431 kubelet[1806]: I0517 00:38:16.603312 1806 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e1b7f17-dbc2-4069-a940-712425198af5-bpf-maps\") on node \"10.200.4.30\" DevicePath \"\"" May 17 00:38:16.622715 kubelet[1806]: I0517 00:38:16.622693 1806 scope.go:117] "RemoveContainer" containerID="60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712" May 17 00:38:16.624581 env[1411]: time="2025-05-17T00:38:16.624199570Z" level=info msg="RemoveContainer for \"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712\"" May 17 00:38:16.629911 systemd[1]: Removed slice kubepods-burstable-pod3e1b7f17_dbc2_4069_a940_712425198af5.slice. May 17 00:38:16.630034 systemd[1]: kubepods-burstable-pod3e1b7f17_dbc2_4069_a940_712425198af5.slice: Consumed 7.085s CPU time. May 17 00:38:16.636099 env[1411]: time="2025-05-17T00:38:16.636054365Z" level=info msg="RemoveContainer for \"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712\" returns successfully" May 17 00:38:16.636467 kubelet[1806]: I0517 00:38:16.636386 1806 scope.go:117] "RemoveContainer" containerID="5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0" May 17 00:38:16.638636 env[1411]: time="2025-05-17T00:38:16.638550485Z" level=info msg="RemoveContainer for \"5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0\"" May 17 00:38:16.646852 env[1411]: time="2025-05-17T00:38:16.646812651Z" level=info msg="RemoveContainer for \"5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0\" returns successfully" May 17 00:38:16.647014 kubelet[1806]: I0517 00:38:16.646990 1806 scope.go:117] "RemoveContainer" containerID="54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107" May 17 00:38:16.648107 env[1411]: time="2025-05-17T00:38:16.648042661Z" 
level=info msg="RemoveContainer for \"54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107\"" May 17 00:38:16.655927 env[1411]: time="2025-05-17T00:38:16.655891724Z" level=info msg="RemoveContainer for \"54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107\" returns successfully" May 17 00:38:16.656078 kubelet[1806]: I0517 00:38:16.656052 1806 scope.go:117] "RemoveContainer" containerID="d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086" May 17 00:38:16.657115 env[1411]: time="2025-05-17T00:38:16.657085034Z" level=info msg="RemoveContainer for \"d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086\"" May 17 00:38:16.665444 env[1411]: time="2025-05-17T00:38:16.665409801Z" level=info msg="RemoveContainer for \"d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086\" returns successfully" May 17 00:38:16.665600 kubelet[1806]: I0517 00:38:16.665571 1806 scope.go:117] "RemoveContainer" containerID="c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f" May 17 00:38:16.666552 env[1411]: time="2025-05-17T00:38:16.666525510Z" level=info msg="RemoveContainer for \"c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f\"" May 17 00:38:16.673466 env[1411]: time="2025-05-17T00:38:16.673430565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wk62w,Uid:248de5ae-5b22-4632-a7b6-5b17a1431892,Namespace:kube-system,Attempt:0,}" May 17 00:38:16.675420 env[1411]: time="2025-05-17T00:38:16.675379881Z" level=info msg="RemoveContainer for \"c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f\" returns successfully" May 17 00:38:16.675575 kubelet[1806]: I0517 00:38:16.675553 1806 scope.go:117] "RemoveContainer" containerID="60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712" May 17 00:38:16.675783 env[1411]: time="2025-05-17T00:38:16.675714984Z" level=error msg="ContainerStatus for 
\"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712\": not found" May 17 00:38:16.675940 kubelet[1806]: E0517 00:38:16.675921 1806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712\": not found" containerID="60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712" May 17 00:38:16.676024 kubelet[1806]: I0517 00:38:16.675951 1806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712"} err="failed to get container status \"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712\": rpc error: code = NotFound desc = an error occurred when try to find container \"60647a77500b791474909f953ed362f05cd2c8b20c86931ea81ead2448640712\": not found" May 17 00:38:16.676024 kubelet[1806]: I0517 00:38:16.676003 1806 scope.go:117] "RemoveContainer" containerID="5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0" May 17 00:38:16.676245 env[1411]: time="2025-05-17T00:38:16.676195687Z" level=error msg="ContainerStatus for \"5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0\": not found" May 17 00:38:16.676356 kubelet[1806]: E0517 00:38:16.676330 1806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0\": not found" 
containerID="5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0" May 17 00:38:16.676428 kubelet[1806]: I0517 00:38:16.676375 1806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0"} err="failed to get container status \"5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c8cd5ee4ce8e4f1fab4828d30b0a11c3cb77e68ba66a36cd9032926db066df0\": not found" May 17 00:38:16.676428 kubelet[1806]: I0517 00:38:16.676400 1806 scope.go:117] "RemoveContainer" containerID="54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107" May 17 00:38:16.676610 env[1411]: time="2025-05-17T00:38:16.676556390Z" level=error msg="ContainerStatus for \"54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107\": not found" May 17 00:38:16.676715 kubelet[1806]: E0517 00:38:16.676692 1806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107\": not found" containerID="54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107" May 17 00:38:16.676786 kubelet[1806]: I0517 00:38:16.676717 1806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107"} err="failed to get container status \"54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107\": rpc error: code = NotFound desc = an error occurred when try to find container \"54a6333c9eba7fb9d40d3617ef5a9cd8179d820b0a2a0871aca3756a41288107\": not found" May 17 00:38:16.676786 
kubelet[1806]: I0517 00:38:16.676736 1806 scope.go:117] "RemoveContainer" containerID="d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086" May 17 00:38:16.676944 env[1411]: time="2025-05-17T00:38:16.676900393Z" level=error msg="ContainerStatus for \"d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086\": not found" May 17 00:38:16.677054 kubelet[1806]: E0517 00:38:16.677030 1806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086\": not found" containerID="d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086" May 17 00:38:16.677124 kubelet[1806]: I0517 00:38:16.677058 1806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086"} err="failed to get container status \"d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9457ce3ff68c481a2844cb4568548ed9ecf11859df0ebf15c1df1b721976086\": not found" May 17 00:38:16.677124 kubelet[1806]: I0517 00:38:16.677078 1806 scope.go:117] "RemoveContainer" containerID="c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f" May 17 00:38:16.677332 env[1411]: time="2025-05-17T00:38:16.677277696Z" level=error msg="ContainerStatus for \"c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f\": not found" May 17 00:38:16.677450 kubelet[1806]: E0517 00:38:16.677425 1806 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f\": not found" containerID="c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f" May 17 00:38:16.677518 kubelet[1806]: I0517 00:38:16.677457 1806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f"} err="failed to get container status \"c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c713a2f47d7f538fc919e177bb529ee16b6da64c006174d2a813b139c591c62f\": not found" May 17 00:38:16.717428 env[1411]: time="2025-05-17T00:38:16.717347718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:38:16.717608 env[1411]: time="2025-05-17T00:38:16.717439019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:38:16.717608 env[1411]: time="2025-05-17T00:38:16.717467919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:38:16.717741 env[1411]: time="2025-05-17T00:38:16.717700321Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b99d2b217099a40a079e9cc90f7930df82ecca4d3d71605aaf13fa8d83ce7e36 pid=3355 runtime=io.containerd.runc.v2 May 17 00:38:16.730591 systemd[1]: Started cri-containerd-b99d2b217099a40a079e9cc90f7930df82ecca4d3d71605aaf13fa8d83ce7e36.scope. 
May 17 00:38:16.770450 env[1411]: time="2025-05-17T00:38:16.770403045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wk62w,Uid:248de5ae-5b22-4632-a7b6-5b17a1431892,Namespace:kube-system,Attempt:0,} returns sandbox id \"b99d2b217099a40a079e9cc90f7930df82ecca4d3d71605aaf13fa8d83ce7e36\"" May 17 00:38:16.772785 env[1411]: time="2025-05-17T00:38:16.772756163Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:38:16.775747 env[1411]: time="2025-05-17T00:38:16.775716587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-88fgc,Uid:eabbfea2-39f6-4075-8f9c-91e8de2912a9,Namespace:kube-system,Attempt:0,}" May 17 00:38:16.829296 env[1411]: time="2025-05-17T00:38:16.829231717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:38:16.829296 env[1411]: time="2025-05-17T00:38:16.829265518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:38:16.829489 env[1411]: time="2025-05-17T00:38:16.829278718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:38:16.829736 env[1411]: time="2025-05-17T00:38:16.829667821Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2aaf544c18f652ed03914b79eb7ced8456e9ff921e1fe75a6763fbdd13954229 pid=3400 runtime=io.containerd.runc.v2 May 17 00:38:16.841546 systemd[1]: Started cri-containerd-2aaf544c18f652ed03914b79eb7ced8456e9ff921e1fe75a6763fbdd13954229.scope. 
May 17 00:38:16.871187 env[1411]: time="2025-05-17T00:38:16.870828952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-88fgc,Uid:eabbfea2-39f6-4075-8f9c-91e8de2912a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2aaf544c18f652ed03914b79eb7ced8456e9ff921e1fe75a6763fbdd13954229\"" May 17 00:38:16.883412 env[1411]: time="2025-05-17T00:38:16.883372053Z" level=info msg="CreateContainer within sandbox \"2aaf544c18f652ed03914b79eb7ced8456e9ff921e1fe75a6763fbdd13954229\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:38:16.917730 env[1411]: time="2025-05-17T00:38:16.917679028Z" level=info msg="CreateContainer within sandbox \"2aaf544c18f652ed03914b79eb7ced8456e9ff921e1fe75a6763fbdd13954229\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca\"" May 17 00:38:16.918213 env[1411]: time="2025-05-17T00:38:16.918178832Z" level=info msg="StartContainer for \"a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca\"" May 17 00:38:16.934386 systemd[1]: Started cri-containerd-a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca.scope. May 17 00:38:16.945359 systemd[1]: cri-containerd-a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca.scope: Deactivated successfully. May 17 00:38:16.945569 systemd[1]: Stopped cri-containerd-a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca.scope. 
May 17 00:38:16.973718 env[1411]: time="2025-05-17T00:38:16.973649178Z" level=info msg="shim disconnected" id=a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca May 17 00:38:16.973941 env[1411]: time="2025-05-17T00:38:16.973722879Z" level=warning msg="cleaning up after shim disconnected" id=a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca namespace=k8s.io May 17 00:38:16.973941 env[1411]: time="2025-05-17T00:38:16.973735579Z" level=info msg="cleaning up dead shim" May 17 00:38:16.981846 env[1411]: time="2025-05-17T00:38:16.981792544Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3457 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:38:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 17 00:38:16.982174 env[1411]: time="2025-05-17T00:38:16.982060346Z" level=error msg="copy shim log" error="read /proc/self/fd/64: file already closed" May 17 00:38:16.983252 env[1411]: time="2025-05-17T00:38:16.983206955Z" level=error msg="Failed to pipe stdout of container \"a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca\"" error="reading from a closed fifo" May 17 00:38:16.983441 env[1411]: time="2025-05-17T00:38:16.983379656Z" level=error msg="Failed to pipe stderr of container \"a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca\"" error="reading from a closed fifo" May 17 00:38:16.988907 env[1411]: time="2025-05-17T00:38:16.988854700Z" level=error msg="StartContainer for \"a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" May 17 00:38:16.989178 kubelet[1806]: E0517 00:38:16.989108 1806 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca" May 17 00:38:16.989383 kubelet[1806]: E0517 00:38:16.989346 1806 kuberuntime_manager.go:1358] "Unhandled Error" err=< May 17 00:38:16.989383 kubelet[1806]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 17 00:38:16.989383 kubelet[1806]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 17 00:38:16.989383 kubelet[1806]: rm /hostbin/cilium-mount May 17 00:38:16.989558 kubelet[1806]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cpql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-88fgc_kube-system(eabbfea2-39f6-4075-8f9c-91e8de2912a9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 17 00:38:16.989558 kubelet[1806]: > logger="UnhandledError" May 17 00:38:16.990857 kubelet[1806]: E0517 00:38:16.990823 1806 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-88fgc" podUID="eabbfea2-39f6-4075-8f9c-91e8de2912a9" May 17 00:38:17.371764 kubelet[1806]: E0517 00:38:17.371692 1806 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:17.418681 kubelet[1806]: E0517 00:38:17.418624 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:38:17.422725 env[1411]: time="2025-05-17T00:38:17.422687810Z" level=info msg="StopPodSandbox for \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\"" May 17 00:38:17.423089 env[1411]: time="2025-05-17T00:38:17.422778611Z" level=info msg="TearDown network for sandbox \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" successfully" May 17 00:38:17.423089 env[1411]: time="2025-05-17T00:38:17.422822511Z" level=info msg="StopPodSandbox for \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" returns successfully" May 17 00:38:17.424373 env[1411]: time="2025-05-17T00:38:17.423321315Z" level=info msg="RemovePodSandbox for \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\"" May 17 00:38:17.424373 env[1411]: time="2025-05-17T00:38:17.423349515Z" level=info msg="Forcibly stopping sandbox \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\"" May 17 00:38:17.424373 env[1411]: time="2025-05-17T00:38:17.423406115Z" level=info msg="TearDown network for sandbox \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" successfully" May 17 00:38:17.431327 env[1411]: time="2025-05-17T00:38:17.431285877Z" 
level=info msg="RemovePodSandbox \"94b2f9eccc31be70fb1e9e0df0728f7c060ee7ce7c218dac895f19d540857700\" returns successfully" May 17 00:38:17.482587 kubelet[1806]: I0517 00:38:17.482535 1806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e1b7f17-dbc2-4069-a940-712425198af5" path="/var/lib/kubelet/pods/3e1b7f17-dbc2-4069-a940-712425198af5/volumes" May 17 00:38:17.502618 kubelet[1806]: E0517 00:38:17.502559 1806 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:38:17.626874 env[1411]: time="2025-05-17T00:38:17.626736612Z" level=info msg="StopPodSandbox for \"2aaf544c18f652ed03914b79eb7ced8456e9ff921e1fe75a6763fbdd13954229\"" May 17 00:38:17.626874 env[1411]: time="2025-05-17T00:38:17.626810913Z" level=info msg="Container to stop \"a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:17.630221 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2aaf544c18f652ed03914b79eb7ced8456e9ff921e1fe75a6763fbdd13954229-shm.mount: Deactivated successfully. May 17 00:38:17.637302 systemd[1]: cri-containerd-2aaf544c18f652ed03914b79eb7ced8456e9ff921e1fe75a6763fbdd13954229.scope: Deactivated successfully. May 17 00:38:17.657883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2aaf544c18f652ed03914b79eb7ced8456e9ff921e1fe75a6763fbdd13954229-rootfs.mount: Deactivated successfully. 
May 17 00:38:17.677127 env[1411]: time="2025-05-17T00:38:17.677078408Z" level=info msg="shim disconnected" id=2aaf544c18f652ed03914b79eb7ced8456e9ff921e1fe75a6763fbdd13954229 May 17 00:38:17.677586 env[1411]: time="2025-05-17T00:38:17.677544611Z" level=warning msg="cleaning up after shim disconnected" id=2aaf544c18f652ed03914b79eb7ced8456e9ff921e1fe75a6763fbdd13954229 namespace=k8s.io May 17 00:38:17.677803 env[1411]: time="2025-05-17T00:38:17.677773713Z" level=info msg="cleaning up dead shim" May 17 00:38:17.685472 env[1411]: time="2025-05-17T00:38:17.685436473Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3491 runtime=io.containerd.runc.v2\n" May 17 00:38:17.685777 env[1411]: time="2025-05-17T00:38:17.685743176Z" level=info msg="TearDown network for sandbox \"2aaf544c18f652ed03914b79eb7ced8456e9ff921e1fe75a6763fbdd13954229\" successfully" May 17 00:38:17.685861 env[1411]: time="2025-05-17T00:38:17.685775476Z" level=info msg="StopPodSandbox for \"2aaf544c18f652ed03914b79eb7ced8456e9ff921e1fe75a6763fbdd13954229\" returns successfully" May 17 00:38:17.808881 kubelet[1806]: I0517 00:38:17.808827 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-cgroup\") pod \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " May 17 00:38:17.808881 kubelet[1806]: I0517 00:38:17.808881 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-host-proc-sys-net\") pod \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " May 17 00:38:17.809259 kubelet[1806]: I0517 00:38:17.808961 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/eabbfea2-39f6-4075-8f9c-91e8de2912a9-hubble-tls\") pod \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " May 17 00:38:17.809259 kubelet[1806]: I0517 00:38:17.809016 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eabbfea2-39f6-4075-8f9c-91e8de2912a9-clustermesh-secrets\") pod \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " May 17 00:38:17.809259 kubelet[1806]: I0517 00:38:17.809080 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-hostproc\") pod \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " May 17 00:38:17.809259 kubelet[1806]: I0517 00:38:17.809108 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-host-proc-sys-kernel\") pod \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " May 17 00:38:17.809259 kubelet[1806]: I0517 00:38:17.809133 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cni-path\") pod \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " May 17 00:38:17.809259 kubelet[1806]: I0517 00:38:17.809180 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-xtables-lock\") pod \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " May 17 00:38:17.809259 kubelet[1806]: I0517 00:38:17.809210 1806 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cpql\" (UniqueName: \"kubernetes.io/projected/eabbfea2-39f6-4075-8f9c-91e8de2912a9-kube-api-access-9cpql\") pod \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " May 17 00:38:17.809259 kubelet[1806]: I0517 00:38:17.809241 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-ipsec-secrets\") pod \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " May 17 00:38:17.809259 kubelet[1806]: I0517 00:38:17.809264 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-bpf-maps\") pod \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " May 17 00:38:17.809894 kubelet[1806]: I0517 00:38:17.809288 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-etc-cni-netd\") pod \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " May 17 00:38:17.809894 kubelet[1806]: I0517 00:38:17.809323 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-config-path\") pod \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " May 17 00:38:17.809894 kubelet[1806]: I0517 00:38:17.809350 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-lib-modules\") pod \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\" 
(UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " May 17 00:38:17.809894 kubelet[1806]: I0517 00:38:17.809379 1806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-run\") pod \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\" (UID: \"eabbfea2-39f6-4075-8f9c-91e8de2912a9\") " May 17 00:38:17.809894 kubelet[1806]: I0517 00:38:17.809477 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "eabbfea2-39f6-4075-8f9c-91e8de2912a9" (UID: "eabbfea2-39f6-4075-8f9c-91e8de2912a9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:17.809894 kubelet[1806]: I0517 00:38:17.809518 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "eabbfea2-39f6-4075-8f9c-91e8de2912a9" (UID: "eabbfea2-39f6-4075-8f9c-91e8de2912a9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:17.809894 kubelet[1806]: I0517 00:38:17.809618 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "eabbfea2-39f6-4075-8f9c-91e8de2912a9" (UID: "eabbfea2-39f6-4075-8f9c-91e8de2912a9"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:17.813160 kubelet[1806]: I0517 00:38:17.810811 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-hostproc" (OuterVolumeSpecName: "hostproc") pod "eabbfea2-39f6-4075-8f9c-91e8de2912a9" (UID: "eabbfea2-39f6-4075-8f9c-91e8de2912a9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:17.813160 kubelet[1806]: I0517 00:38:17.810865 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "eabbfea2-39f6-4075-8f9c-91e8de2912a9" (UID: "eabbfea2-39f6-4075-8f9c-91e8de2912a9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:17.813160 kubelet[1806]: I0517 00:38:17.810892 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cni-path" (OuterVolumeSpecName: "cni-path") pod "eabbfea2-39f6-4075-8f9c-91e8de2912a9" (UID: "eabbfea2-39f6-4075-8f9c-91e8de2912a9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:17.813160 kubelet[1806]: I0517 00:38:17.810916 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "eabbfea2-39f6-4075-8f9c-91e8de2912a9" (UID: "eabbfea2-39f6-4075-8f9c-91e8de2912a9"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:17.813160 kubelet[1806]: I0517 00:38:17.810942 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "eabbfea2-39f6-4075-8f9c-91e8de2912a9" (UID: "eabbfea2-39f6-4075-8f9c-91e8de2912a9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:17.813160 kubelet[1806]: I0517 00:38:17.810966 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "eabbfea2-39f6-4075-8f9c-91e8de2912a9" (UID: "eabbfea2-39f6-4075-8f9c-91e8de2912a9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:17.814015 kubelet[1806]: I0517 00:38:17.813983 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eabbfea2-39f6-4075-8f9c-91e8de2912a9" (UID: "eabbfea2-39f6-4075-8f9c-91e8de2912a9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:38:17.814178 kubelet[1806]: I0517 00:38:17.814138 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "eabbfea2-39f6-4075-8f9c-91e8de2912a9" (UID: "eabbfea2-39f6-4075-8f9c-91e8de2912a9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:38:17.818340 systemd[1]: var-lib-kubelet-pods-eabbfea2\x2d39f6\x2d4075\x2d8f9c\x2d91e8de2912a9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 17 00:38:17.819508 kubelet[1806]: I0517 00:38:17.819049 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eabbfea2-39f6-4075-8f9c-91e8de2912a9-kube-api-access-9cpql" (OuterVolumeSpecName: "kube-api-access-9cpql") pod "eabbfea2-39f6-4075-8f9c-91e8de2912a9" (UID: "eabbfea2-39f6-4075-8f9c-91e8de2912a9"). InnerVolumeSpecName "kube-api-access-9cpql". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:38:17.822561 kubelet[1806]: I0517 00:38:17.822535 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eabbfea2-39f6-4075-8f9c-91e8de2912a9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "eabbfea2-39f6-4075-8f9c-91e8de2912a9" (UID: "eabbfea2-39f6-4075-8f9c-91e8de2912a9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:38:17.823310 systemd[1]: var-lib-kubelet-pods-eabbfea2\x2d39f6\x2d4075\x2d8f9c\x2d91e8de2912a9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9cpql.mount: Deactivated successfully. May 17 00:38:17.824516 kubelet[1806]: I0517 00:38:17.824492 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eabbfea2-39f6-4075-8f9c-91e8de2912a9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "eabbfea2-39f6-4075-8f9c-91e8de2912a9" (UID: "eabbfea2-39f6-4075-8f9c-91e8de2912a9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:38:17.826539 kubelet[1806]: I0517 00:38:17.826510 1806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "eabbfea2-39f6-4075-8f9c-91e8de2912a9" (UID: "eabbfea2-39f6-4075-8f9c-91e8de2912a9"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 17 00:38:17.910295 kubelet[1806]: I0517 00:38:17.910248 1806 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-hostproc\") on node \"10.200.4.30\" DevicePath \"\""
May 17 00:38:17.910295 kubelet[1806]: I0517 00:38:17.910288 1806 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-host-proc-sys-kernel\") on node \"10.200.4.30\" DevicePath \"\""
May 17 00:38:17.910295 kubelet[1806]: I0517 00:38:17.910300 1806 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cni-path\") on node \"10.200.4.30\" DevicePath \"\""
May 17 00:38:17.910542 kubelet[1806]: I0517 00:38:17.910312 1806 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-xtables-lock\") on node \"10.200.4.30\" DevicePath \"\""
May 17 00:38:17.910542 kubelet[1806]: I0517 00:38:17.910323 1806 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9cpql\" (UniqueName: \"kubernetes.io/projected/eabbfea2-39f6-4075-8f9c-91e8de2912a9-kube-api-access-9cpql\") on node \"10.200.4.30\" DevicePath \"\""
May 17 00:38:17.910542 kubelet[1806]: I0517 00:38:17.910333 1806 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-ipsec-secrets\") on node \"10.200.4.30\" DevicePath \"\""
May 17 00:38:17.910542 kubelet[1806]: I0517 00:38:17.910343 1806 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-bpf-maps\") on node \"10.200.4.30\" DevicePath \"\""
May 17 00:38:17.910542 kubelet[1806]: I0517 00:38:17.910353 1806 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-etc-cni-netd\") on node \"10.200.4.30\" DevicePath \"\""
May 17 00:38:17.910542 kubelet[1806]: I0517 00:38:17.910364 1806 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-config-path\") on node \"10.200.4.30\" DevicePath \"\""
May 17 00:38:17.910542 kubelet[1806]: I0517 00:38:17.910373 1806 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-lib-modules\") on node \"10.200.4.30\" DevicePath \"\""
May 17 00:38:17.910542 kubelet[1806]: I0517 00:38:17.910383 1806 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-run\") on node \"10.200.4.30\" DevicePath \"\""
May 17 00:38:17.910542 kubelet[1806]: I0517 00:38:17.910392 1806 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-cilium-cgroup\") on node \"10.200.4.30\" DevicePath \"\""
May 17 00:38:17.910542 kubelet[1806]: I0517 00:38:17.910402 1806 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eabbfea2-39f6-4075-8f9c-91e8de2912a9-host-proc-sys-net\") on node \"10.200.4.30\" DevicePath \"\""
May 17 00:38:17.910542 kubelet[1806]: I0517 00:38:17.910413 1806 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eabbfea2-39f6-4075-8f9c-91e8de2912a9-hubble-tls\") on node \"10.200.4.30\" DevicePath \"\""
May 17 00:38:17.910542 kubelet[1806]: I0517 00:38:17.910423 1806 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eabbfea2-39f6-4075-8f9c-91e8de2912a9-clustermesh-secrets\") on node \"10.200.4.30\" DevicePath \"\""
May 17 00:38:18.377005 systemd[1]: var-lib-kubelet-pods-eabbfea2\x2d39f6\x2d4075\x2d8f9c\x2d91e8de2912a9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 17 00:38:18.377161 systemd[1]: var-lib-kubelet-pods-eabbfea2\x2d39f6\x2d4075\x2d8f9c\x2d91e8de2912a9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 17 00:38:18.419242 kubelet[1806]: E0517 00:38:18.419202 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:18.634893 kubelet[1806]: I0517 00:38:18.634522 1806 scope.go:117] "RemoveContainer" containerID="a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca"
May 17 00:38:18.637593 env[1411]: time="2025-05-17T00:38:18.637552839Z" level=info msg="RemoveContainer for \"a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca\""
May 17 00:38:18.638545 systemd[1]: Removed slice kubepods-burstable-podeabbfea2_39f6_4075_8f9c_91e8de2912a9.slice.
May 17 00:38:18.655565 env[1411]: time="2025-05-17T00:38:18.655531077Z" level=info msg="RemoveContainer for \"a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca\" returns successfully"
May 17 00:38:18.706350 systemd[1]: Created slice kubepods-burstable-pod409bf648_76ad_4864_aeb1_17a569936da0.slice.
May 17 00:38:18.816442 kubelet[1806]: I0517 00:38:18.816384 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/409bf648-76ad-4864-aeb1-17a569936da0-host-proc-sys-kernel\") pod \"cilium-r6dfm\" (UID: \"409bf648-76ad-4864-aeb1-17a569936da0\") " pod="kube-system/cilium-r6dfm"
May 17 00:38:18.816442 kubelet[1806]: I0517 00:38:18.816440 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/409bf648-76ad-4864-aeb1-17a569936da0-bpf-maps\") pod \"cilium-r6dfm\" (UID: \"409bf648-76ad-4864-aeb1-17a569936da0\") " pod="kube-system/cilium-r6dfm"
May 17 00:38:18.816700 kubelet[1806]: I0517 00:38:18.816469 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/409bf648-76ad-4864-aeb1-17a569936da0-hostproc\") pod \"cilium-r6dfm\" (UID: \"409bf648-76ad-4864-aeb1-17a569936da0\") " pod="kube-system/cilium-r6dfm"
May 17 00:38:18.816700 kubelet[1806]: I0517 00:38:18.816492 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/409bf648-76ad-4864-aeb1-17a569936da0-cilium-config-path\") pod \"cilium-r6dfm\" (UID: \"409bf648-76ad-4864-aeb1-17a569936da0\") " pod="kube-system/cilium-r6dfm"
May 17 00:38:18.816700 kubelet[1806]: I0517 00:38:18.816517 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/409bf648-76ad-4864-aeb1-17a569936da0-host-proc-sys-net\") pod \"cilium-r6dfm\" (UID: \"409bf648-76ad-4864-aeb1-17a569936da0\") " pod="kube-system/cilium-r6dfm"
May 17 00:38:18.816700 kubelet[1806]: I0517 00:38:18.816538 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/409bf648-76ad-4864-aeb1-17a569936da0-clustermesh-secrets\") pod \"cilium-r6dfm\" (UID: \"409bf648-76ad-4864-aeb1-17a569936da0\") " pod="kube-system/cilium-r6dfm"
May 17 00:38:18.816700 kubelet[1806]: I0517 00:38:18.816571 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/409bf648-76ad-4864-aeb1-17a569936da0-cilium-run\") pod \"cilium-r6dfm\" (UID: \"409bf648-76ad-4864-aeb1-17a569936da0\") " pod="kube-system/cilium-r6dfm"
May 17 00:38:18.816700 kubelet[1806]: I0517 00:38:18.816596 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/409bf648-76ad-4864-aeb1-17a569936da0-lib-modules\") pod \"cilium-r6dfm\" (UID: \"409bf648-76ad-4864-aeb1-17a569936da0\") " pod="kube-system/cilium-r6dfm"
May 17 00:38:18.816700 kubelet[1806]: I0517 00:38:18.816616 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/409bf648-76ad-4864-aeb1-17a569936da0-xtables-lock\") pod \"cilium-r6dfm\" (UID: \"409bf648-76ad-4864-aeb1-17a569936da0\") " pod="kube-system/cilium-r6dfm"
May 17 00:38:18.816700 kubelet[1806]: I0517 00:38:18.816637 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/409bf648-76ad-4864-aeb1-17a569936da0-cilium-cgroup\") pod \"cilium-r6dfm\" (UID: \"409bf648-76ad-4864-aeb1-17a569936da0\") " pod="kube-system/cilium-r6dfm"
May 17 00:38:18.816700 kubelet[1806]: I0517 00:38:18.816658 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/409bf648-76ad-4864-aeb1-17a569936da0-cni-path\") pod \"cilium-r6dfm\" (UID: \"409bf648-76ad-4864-aeb1-17a569936da0\") " pod="kube-system/cilium-r6dfm"
May 17 00:38:18.816700 kubelet[1806]: I0517 00:38:18.816683 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td5h8\" (UniqueName: \"kubernetes.io/projected/409bf648-76ad-4864-aeb1-17a569936da0-kube-api-access-td5h8\") pod \"cilium-r6dfm\" (UID: \"409bf648-76ad-4864-aeb1-17a569936da0\") " pod="kube-system/cilium-r6dfm"
May 17 00:38:18.817064 kubelet[1806]: I0517 00:38:18.816704 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/409bf648-76ad-4864-aeb1-17a569936da0-etc-cni-netd\") pod \"cilium-r6dfm\" (UID: \"409bf648-76ad-4864-aeb1-17a569936da0\") " pod="kube-system/cilium-r6dfm"
May 17 00:38:18.817064 kubelet[1806]: I0517 00:38:18.816748 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/409bf648-76ad-4864-aeb1-17a569936da0-hubble-tls\") pod \"cilium-r6dfm\" (UID: \"409bf648-76ad-4864-aeb1-17a569936da0\") " pod="kube-system/cilium-r6dfm"
May 17 00:38:18.817064 kubelet[1806]: I0517 00:38:18.816771 1806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/409bf648-76ad-4864-aeb1-17a569936da0-cilium-ipsec-secrets\") pod \"cilium-r6dfm\" (UID: \"409bf648-76ad-4864-aeb1-17a569936da0\") " pod="kube-system/cilium-r6dfm"
May 17 00:38:18.968116 kubelet[1806]: I0517 00:38:18.968021 1806 setters.go:618] "Node became not ready" node="10.200.4.30" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:38:18Z","lastTransitionTime":"2025-05-17T00:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 17 00:38:19.013603 env[1411]: time="2025-05-17T00:38:19.013550124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r6dfm,Uid:409bf648-76ad-4864-aeb1-17a569936da0,Namespace:kube-system,Attempt:0,}"
May 17 00:38:19.054406 env[1411]: time="2025-05-17T00:38:19.054332630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:38:19.054617 env[1411]: time="2025-05-17T00:38:19.054368330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:38:19.054617 env[1411]: time="2025-05-17T00:38:19.054382130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:38:19.054617 env[1411]: time="2025-05-17T00:38:19.054519031Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3145dd846f69bdc5359b9a9a15a5788967d09ef396f048c07f8b37655053dbcb pid=3520 runtime=io.containerd.runc.v2
May 17 00:38:19.067660 systemd[1]: Started cri-containerd-3145dd846f69bdc5359b9a9a15a5788967d09ef396f048c07f8b37655053dbcb.scope.
May 17 00:38:19.093242 env[1411]: time="2025-05-17T00:38:19.093186322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r6dfm,Uid:409bf648-76ad-4864-aeb1-17a569936da0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3145dd846f69bdc5359b9a9a15a5788967d09ef396f048c07f8b37655053dbcb\""
May 17 00:38:19.100635 env[1411]: time="2025-05-17T00:38:19.100593777Z" level=info msg="CreateContainer within sandbox \"3145dd846f69bdc5359b9a9a15a5788967d09ef396f048c07f8b37655053dbcb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:38:19.139195 env[1411]: time="2025-05-17T00:38:19.139130966Z" level=info msg="CreateContainer within sandbox \"3145dd846f69bdc5359b9a9a15a5788967d09ef396f048c07f8b37655053dbcb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3d019531625d312c0045200f2c34436273efee8a0b6a9883348e9987c08e9a5c\""
May 17 00:38:19.139862 env[1411]: time="2025-05-17T00:38:19.139826572Z" level=info msg="StartContainer for \"3d019531625d312c0045200f2c34436273efee8a0b6a9883348e9987c08e9a5c\""
May 17 00:38:19.155294 systemd[1]: Started cri-containerd-3d019531625d312c0045200f2c34436273efee8a0b6a9883348e9987c08e9a5c.scope.
May 17 00:38:19.187785 env[1411]: time="2025-05-17T00:38:19.187732731Z" level=info msg="StartContainer for \"3d019531625d312c0045200f2c34436273efee8a0b6a9883348e9987c08e9a5c\" returns successfully"
May 17 00:38:19.192244 systemd[1]: cri-containerd-3d019531625d312c0045200f2c34436273efee8a0b6a9883348e9987c08e9a5c.scope: Deactivated successfully.
May 17 00:38:19.258245 env[1411]: time="2025-05-17T00:38:19.258087359Z" level=info msg="shim disconnected" id=3d019531625d312c0045200f2c34436273efee8a0b6a9883348e9987c08e9a5c
May 17 00:38:19.258245 env[1411]: time="2025-05-17T00:38:19.258169060Z" level=warning msg="cleaning up after shim disconnected" id=3d019531625d312c0045200f2c34436273efee8a0b6a9883348e9987c08e9a5c namespace=k8s.io
May 17 00:38:19.258245 env[1411]: time="2025-05-17T00:38:19.258183960Z" level=info msg="cleaning up dead shim"
May 17 00:38:19.267258 env[1411]: time="2025-05-17T00:38:19.267219627Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3605 runtime=io.containerd.runc.v2\n"
May 17 00:38:19.420242 kubelet[1806]: E0517 00:38:19.420188 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:19.483169 kubelet[1806]: I0517 00:38:19.483089 1806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eabbfea2-39f6-4075-8f9c-91e8de2912a9" path="/var/lib/kubelet/pods/eabbfea2-39f6-4075-8f9c-91e8de2912a9/volumes"
May 17 00:38:19.651658 env[1411]: time="2025-05-17T00:38:19.651597012Z" level=info msg="CreateContainer within sandbox \"3145dd846f69bdc5359b9a9a15a5788967d09ef396f048c07f8b37655053dbcb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:38:19.713202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2752739011.mount: Deactivated successfully.
May 17 00:38:19.748936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389683227.mount: Deactivated successfully.
May 17 00:38:19.768494 env[1411]: time="2025-05-17T00:38:19.768438188Z" level=info msg="CreateContainer within sandbox \"3145dd846f69bdc5359b9a9a15a5788967d09ef396f048c07f8b37655053dbcb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"39fbe73d643cef0d5e98307f97a3c1b6ff110832216d27def473a4ddccbe8772\""
May 17 00:38:19.769120 env[1411]: time="2025-05-17T00:38:19.769033593Z" level=info msg="StartContainer for \"39fbe73d643cef0d5e98307f97a3c1b6ff110832216d27def473a4ddccbe8772\""
May 17 00:38:19.796628 systemd[1]: Started cri-containerd-39fbe73d643cef0d5e98307f97a3c1b6ff110832216d27def473a4ddccbe8772.scope.
May 17 00:38:19.840597 env[1411]: time="2025-05-17T00:38:19.840536229Z" level=info msg="StartContainer for \"39fbe73d643cef0d5e98307f97a3c1b6ff110832216d27def473a4ddccbe8772\" returns successfully"
May 17 00:38:19.843020 systemd[1]: cri-containerd-39fbe73d643cef0d5e98307f97a3c1b6ff110832216d27def473a4ddccbe8772.scope: Deactivated successfully.
May 17 00:38:19.879281 env[1411]: time="2025-05-17T00:38:19.879233620Z" level=info msg="shim disconnected" id=39fbe73d643cef0d5e98307f97a3c1b6ff110832216d27def473a4ddccbe8772
May 17 00:38:19.879281 env[1411]: time="2025-05-17T00:38:19.879279620Z" level=warning msg="cleaning up after shim disconnected" id=39fbe73d643cef0d5e98307f97a3c1b6ff110832216d27def473a4ddccbe8772 namespace=k8s.io
May 17 00:38:19.879281 env[1411]: time="2025-05-17T00:38:19.879290220Z" level=info msg="cleaning up dead shim"
May 17 00:38:19.887493 env[1411]: time="2025-05-17T00:38:19.887447181Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3669 runtime=io.containerd.runc.v2\n"
May 17 00:38:20.080478 kubelet[1806]: W0517 00:38:20.080354 1806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeabbfea2_39f6_4075_8f9c_91e8de2912a9.slice/cri-containerd-a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca.scope WatchSource:0}: container "a859bd6905bbcbe83786c5ecfc44b03da482d0746772837e49d16f0fa9e956ca" in namespace "k8s.io": not found
May 17 00:38:20.421295 kubelet[1806]: E0517 00:38:20.421220 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:20.501328 env[1411]: time="2025-05-17T00:38:20.501264204Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:38:20.506794 env[1411]: time="2025-05-17T00:38:20.506734444Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:38:20.511642 env[1411]: time="2025-05-17T00:38:20.511592179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:38:20.512338 env[1411]: time="2025-05-17T00:38:20.512304185Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 17 00:38:20.519961 env[1411]: time="2025-05-17T00:38:20.519930640Z" level=info msg="CreateContainer within sandbox \"b99d2b217099a40a079e9cc90f7930df82ecca4d3d71605aaf13fa8d83ce7e36\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 17 00:38:20.567428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3158635252.mount: Deactivated successfully.
May 17 00:38:20.581639 env[1411]: time="2025-05-17T00:38:20.581598193Z" level=info msg="CreateContainer within sandbox \"b99d2b217099a40a079e9cc90f7930df82ecca4d3d71605aaf13fa8d83ce7e36\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"55ffc06a6cf367d7647438638a149c1001736efad9288e316bddc40f2d3dbf64\""
May 17 00:38:20.582100 env[1411]: time="2025-05-17T00:38:20.582070796Z" level=info msg="StartContainer for \"55ffc06a6cf367d7647438638a149c1001736efad9288e316bddc40f2d3dbf64\""
May 17 00:38:20.599512 systemd[1]: Started cri-containerd-55ffc06a6cf367d7647438638a149c1001736efad9288e316bddc40f2d3dbf64.scope.
May 17 00:38:20.628128 env[1411]: time="2025-05-17T00:38:20.628056034Z" level=info msg="StartContainer for \"55ffc06a6cf367d7647438638a149c1001736efad9288e316bddc40f2d3dbf64\" returns successfully"
May 17 00:38:20.655565 kubelet[1806]: I0517 00:38:20.655503 1806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wk62w" podStartSLOduration=0.914417903 podStartE2EDuration="4.655485035s" podCreationTimestamp="2025-05-17 00:38:16 +0000 UTC" firstStartedPulling="2025-05-17 00:38:16.77226726 +0000 UTC m=+60.093829550" lastFinishedPulling="2025-05-17 00:38:20.513334392 +0000 UTC m=+63.834896682" observedRunningTime="2025-05-17 00:38:20.655195833 +0000 UTC m=+63.976758123" watchObservedRunningTime="2025-05-17 00:38:20.655485035 +0000 UTC m=+63.977047325"
May 17 00:38:20.656576 env[1411]: time="2025-05-17T00:38:20.656532943Z" level=info msg="CreateContainer within sandbox \"3145dd846f69bdc5359b9a9a15a5788967d09ef396f048c07f8b37655053dbcb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:38:20.700078 env[1411]: time="2025-05-17T00:38:20.699959961Z" level=info msg="CreateContainer within sandbox \"3145dd846f69bdc5359b9a9a15a5788967d09ef396f048c07f8b37655053dbcb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"65156ace025d018c60dc6ef70649f8479f62a8d8bc3432451fd4a201322a2c71\""
May 17 00:38:20.701371 env[1411]: time="2025-05-17T00:38:20.701339471Z" level=info msg="StartContainer for \"65156ace025d018c60dc6ef70649f8479f62a8d8bc3432451fd4a201322a2c71\""
May 17 00:38:20.727838 systemd[1]: Started cri-containerd-65156ace025d018c60dc6ef70649f8479f62a8d8bc3432451fd4a201322a2c71.scope.
May 17 00:38:20.781505 systemd[1]: cri-containerd-65156ace025d018c60dc6ef70649f8479f62a8d8bc3432451fd4a201322a2c71.scope: Deactivated successfully.
May 17 00:38:20.782859 env[1411]: time="2025-05-17T00:38:20.782786569Z" level=info msg="StartContainer for \"65156ace025d018c60dc6ef70649f8479f62a8d8bc3432451fd4a201322a2c71\" returns successfully"
May 17 00:38:21.277233 env[1411]: time="2025-05-17T00:38:21.277177653Z" level=info msg="shim disconnected" id=65156ace025d018c60dc6ef70649f8479f62a8d8bc3432451fd4a201322a2c71
May 17 00:38:21.277560 env[1411]: time="2025-05-17T00:38:21.277293454Z" level=warning msg="cleaning up after shim disconnected" id=65156ace025d018c60dc6ef70649f8479f62a8d8bc3432451fd4a201322a2c71 namespace=k8s.io
May 17 00:38:21.277560 env[1411]: time="2025-05-17T00:38:21.277310854Z" level=info msg="cleaning up dead shim"
May 17 00:38:21.284890 env[1411]: time="2025-05-17T00:38:21.284846708Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3764 runtime=io.containerd.runc.v2\n"
May 17 00:38:21.421807 kubelet[1806]: E0517 00:38:21.421752 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:21.659525 env[1411]: time="2025-05-17T00:38:21.659475296Z" level=info msg="CreateContainer within sandbox \"3145dd846f69bdc5359b9a9a15a5788967d09ef396f048c07f8b37655053dbcb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:38:21.693834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount742621538.mount: Deactivated successfully.
May 17 00:38:21.710887 env[1411]: time="2025-05-17T00:38:21.710830865Z" level=info msg="CreateContainer within sandbox \"3145dd846f69bdc5359b9a9a15a5788967d09ef396f048c07f8b37655053dbcb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"51416c2a7239f776621f576209a663e7c1772f8878c784e873e2f28fe3996331\""
May 17 00:38:21.711529 env[1411]: time="2025-05-17T00:38:21.711396669Z" level=info msg="StartContainer for \"51416c2a7239f776621f576209a663e7c1772f8878c784e873e2f28fe3996331\""
May 17 00:38:21.729071 systemd[1]: Started cri-containerd-51416c2a7239f776621f576209a663e7c1772f8878c784e873e2f28fe3996331.scope.
May 17 00:38:21.761878 systemd[1]: cri-containerd-51416c2a7239f776621f576209a663e7c1772f8878c784e873e2f28fe3996331.scope: Deactivated successfully.
May 17 00:38:21.768906 env[1411]: time="2025-05-17T00:38:21.768863281Z" level=info msg="StartContainer for \"51416c2a7239f776621f576209a663e7c1772f8878c784e873e2f28fe3996331\" returns successfully"
May 17 00:38:21.799895 env[1411]: time="2025-05-17T00:38:21.799835803Z" level=info msg="shim disconnected" id=51416c2a7239f776621f576209a663e7c1772f8878c784e873e2f28fe3996331
May 17 00:38:21.799895 env[1411]: time="2025-05-17T00:38:21.799895304Z" level=warning msg="cleaning up after shim disconnected" id=51416c2a7239f776621f576209a663e7c1772f8878c784e873e2f28fe3996331 namespace=k8s.io
May 17 00:38:21.800214 env[1411]: time="2025-05-17T00:38:21.799908504Z" level=info msg="cleaning up dead shim"
May 17 00:38:21.807619 env[1411]: time="2025-05-17T00:38:21.807575559Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3818 runtime=io.containerd.runc.v2\n"
May 17 00:38:22.377560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51416c2a7239f776621f576209a663e7c1772f8878c784e873e2f28fe3996331-rootfs.mount: Deactivated successfully.
May 17 00:38:22.422659 kubelet[1806]: E0517 00:38:22.422596 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:22.503794 kubelet[1806]: E0517 00:38:22.503687 1806 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:38:22.664659 env[1411]: time="2025-05-17T00:38:22.664608104Z" level=info msg="CreateContainer within sandbox \"3145dd846f69bdc5359b9a9a15a5788967d09ef396f048c07f8b37655053dbcb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:38:22.698038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1543392650.mount: Deactivated successfully.
May 17 00:38:22.717306 env[1411]: time="2025-05-17T00:38:22.717251674Z" level=info msg="CreateContainer within sandbox \"3145dd846f69bdc5359b9a9a15a5788967d09ef396f048c07f8b37655053dbcb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"55e3f7bff211368c9c90d92f4dc457fc7577f4479c0fcfc3165319462a28c4fb\""
May 17 00:38:22.717942 env[1411]: time="2025-05-17T00:38:22.717827678Z" level=info msg="StartContainer for \"55e3f7bff211368c9c90d92f4dc457fc7577f4479c0fcfc3165319462a28c4fb\""
May 17 00:38:22.735418 systemd[1]: Started cri-containerd-55e3f7bff211368c9c90d92f4dc457fc7577f4479c0fcfc3165319462a28c4fb.scope.
May 17 00:38:22.772972 env[1411]: time="2025-05-17T00:38:22.772920764Z" level=info msg="StartContainer for \"55e3f7bff211368c9c90d92f4dc457fc7577f4479c0fcfc3165319462a28c4fb\" returns successfully"
May 17 00:38:23.120256 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 17 00:38:23.202672 kubelet[1806]: W0517 00:38:23.202517 1806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod409bf648_76ad_4864_aeb1_17a569936da0.slice/cri-containerd-3d019531625d312c0045200f2c34436273efee8a0b6a9883348e9987c08e9a5c.scope WatchSource:0}: task 3d019531625d312c0045200f2c34436273efee8a0b6a9883348e9987c08e9a5c not found
May 17 00:38:23.423755 kubelet[1806]: E0517 00:38:23.423692 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:23.687609 kubelet[1806]: I0517 00:38:23.687542 1806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r6dfm" podStartSLOduration=5.6875276790000004 podStartE2EDuration="5.687527679s" podCreationTimestamp="2025-05-17 00:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:38:23.687402578 +0000 UTC m=+67.008964868" watchObservedRunningTime="2025-05-17 00:38:23.687527679 +0000 UTC m=+67.009089969"
May 17 00:38:24.424620 kubelet[1806]: E0517 00:38:24.424509 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:25.425786 kubelet[1806]: E0517 00:38:25.425737 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:25.857133 systemd-networkd[1556]: lxc_health: Link UP
May 17 00:38:25.873210 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 00:38:25.873491 systemd-networkd[1556]: lxc_health: Gained carrier
May 17 00:38:26.314204 kubelet[1806]: W0517 00:38:26.314155 1806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod409bf648_76ad_4864_aeb1_17a569936da0.slice/cri-containerd-39fbe73d643cef0d5e98307f97a3c1b6ff110832216d27def473a4ddccbe8772.scope WatchSource:0}: task 39fbe73d643cef0d5e98307f97a3c1b6ff110832216d27def473a4ddccbe8772 not found
May 17 00:38:26.426727 kubelet[1806]: E0517 00:38:26.426677 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:27.427061 kubelet[1806]: E0517 00:38:27.427009 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:27.544409 systemd-networkd[1556]: lxc_health: Gained IPv6LL
May 17 00:38:28.427889 kubelet[1806]: E0517 00:38:28.427831 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:29.255582 systemd[1]: run-containerd-runc-k8s.io-55e3f7bff211368c9c90d92f4dc457fc7577f4479c0fcfc3165319462a28c4fb-runc.EuNgub.mount: Deactivated successfully.
May 17 00:38:29.424841 kubelet[1806]: W0517 00:38:29.424784 1806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod409bf648_76ad_4864_aeb1_17a569936da0.slice/cri-containerd-65156ace025d018c60dc6ef70649f8479f62a8d8bc3432451fd4a201322a2c71.scope WatchSource:0}: task 65156ace025d018c60dc6ef70649f8479f62a8d8bc3432451fd4a201322a2c71 not found
May 17 00:38:29.428957 kubelet[1806]: E0517 00:38:29.428918 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:30.429955 kubelet[1806]: E0517 00:38:30.429889 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:31.423028 systemd[1]: run-containerd-runc-k8s.io-55e3f7bff211368c9c90d92f4dc457fc7577f4479c0fcfc3165319462a28c4fb-runc.uWzrYI.mount: Deactivated successfully.
May 17 00:38:31.431073 kubelet[1806]: E0517 00:38:31.431013 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:32.431357 kubelet[1806]: E0517 00:38:32.431292 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:32.533990 kubelet[1806]: W0517 00:38:32.533942 1806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod409bf648_76ad_4864_aeb1_17a569936da0.slice/cri-containerd-51416c2a7239f776621f576209a663e7c1772f8878c784e873e2f28fe3996331.scope WatchSource:0}: task 51416c2a7239f776621f576209a663e7c1772f8878c784e873e2f28fe3996331 not found
May 17 00:38:33.431950 kubelet[1806]: E0517 00:38:33.431885 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:33.615803 systemd[1]: run-containerd-runc-k8s.io-55e3f7bff211368c9c90d92f4dc457fc7577f4479c0fcfc3165319462a28c4fb-runc.d6ropt.mount: Deactivated successfully.
May 17 00:38:34.432419 kubelet[1806]: E0517 00:38:34.432361 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:35.432763 kubelet[1806]: E0517 00:38:35.432716 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:36.433558 kubelet[1806]: E0517 00:38:36.433502 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:37.371394 kubelet[1806]: E0517 00:38:37.371342 1806 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:38:37.436636 kubelet[1806]: E0517 00:38:37.436552 1806 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"