Jun 20 18:50:26.052473 kernel: Linux version 6.6.94-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:12:40 -00 2025
Jun 20 18:50:26.052517 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17
Jun 20 18:50:26.052532 kernel: BIOS-provided physical RAM map:
Jun 20 18:50:26.052545 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jun 20 18:50:26.052558 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jun 20 18:50:26.052585 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jun 20 18:50:26.052602 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jun 20 18:50:26.052616 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jun 20 18:50:26.052636 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jun 20 18:50:26.052649 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jun 20 18:50:26.052661 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jun 20 18:50:26.052675 kernel: printk: bootconsole [earlyser0] enabled
Jun 20 18:50:26.052688 kernel: NX (Execute Disable) protection: active
Jun 20 18:50:26.052703 kernel: APIC: Static calls initialized
Jun 20 18:50:26.052727 kernel: efi: EFI v2.7 by Microsoft
Jun 20 18:50:26.052742 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018
Jun 20 18:50:26.052758 kernel: random: crng init done
Jun 20 18:50:26.052775 kernel: secureboot: Secure boot disabled
Jun 20 18:50:26.052790 kernel: SMBIOS 3.1.0 present.
Jun 20 18:50:26.052804 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jun 20 18:50:26.052820 kernel: Hypervisor detected: Microsoft Hyper-V
Jun 20 18:50:26.052836 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jun 20 18:50:26.052847 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Jun 20 18:50:26.055594 kernel: Hyper-V: Nested features: 0x1e0101
Jun 20 18:50:26.055621 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jun 20 18:50:26.055635 kernel: Hyper-V: Using hypercall for remote TLB flush
Jun 20 18:50:26.055648 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 20 18:50:26.055661 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 20 18:50:26.055674 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jun 20 18:50:26.055687 kernel: tsc: Detected 2593.906 MHz processor
Jun 20 18:50:26.055700 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 20 18:50:26.055713 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 20 18:50:26.055726 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jun 20 18:50:26.055742 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jun 20 18:50:26.055755 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 20 18:50:26.055768 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jun 20 18:50:26.055780 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jun 20 18:50:26.055793 kernel: Using GB pages for direct mapping
Jun 20 18:50:26.055805 kernel: ACPI: Early table checksum verification disabled
Jun 20 18:50:26.055818 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jun 20 18:50:26.055837 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:50:26.055854 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:50:26.055867 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jun 20 18:50:26.055881 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jun 20 18:50:26.055895 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:50:26.055909 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:50:26.055921 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:50:26.055937 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:50:26.055951 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:50:26.055965 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:50:26.055978 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:50:26.055992 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jun 20 18:50:26.056005 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jun 20 18:50:26.056020 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jun 20 18:50:26.056033 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jun 20 18:50:26.056047 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jun 20 18:50:26.056064 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jun 20 18:50:26.056078 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jun 20 18:50:26.056091 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jun 20 18:50:26.056105 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jun 20 18:50:26.056117 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jun 20 18:50:26.056131 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jun 20 18:50:26.056144 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jun 20 18:50:26.056157 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jun 20 18:50:26.056172 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jun 20 18:50:26.056189 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jun 20 18:50:26.056203 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jun 20 18:50:26.056217 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jun 20 18:50:26.056231 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jun 20 18:50:26.056244 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jun 20 18:50:26.056258 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jun 20 18:50:26.056271 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jun 20 18:50:26.056285 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jun 20 18:50:26.056302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jun 20 18:50:26.056316 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jun 20 18:50:26.056329 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jun 20 18:50:26.056342 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jun 20 18:50:26.056356 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jun 20 18:50:26.056369 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jun 20 18:50:26.056383 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jun 20 18:50:26.056397 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jun 20 18:50:26.056411 kernel: Zone ranges:
Jun 20 18:50:26.056427 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 20 18:50:26.056441 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jun 20 18:50:26.056455 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jun 20 18:50:26.056469 kernel: Movable zone start for each node
Jun 20 18:50:26.056482 kernel: Early memory node ranges
Jun 20 18:50:26.056495 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jun 20 18:50:26.056509 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jun 20 18:50:26.056523 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jun 20 18:50:26.056536 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jun 20 18:50:26.056554 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jun 20 18:50:26.056588 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 20 18:50:26.056601 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jun 20 18:50:26.056613 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jun 20 18:50:26.056625 kernel: ACPI: PM-Timer IO Port: 0x408
Jun 20 18:50:26.056636 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jun 20 18:50:26.056645 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jun 20 18:50:26.056657 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 18:50:26.056667 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 20 18:50:26.056681 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jun 20 18:50:26.056690 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jun 20 18:50:26.056701 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jun 20 18:50:26.056711 kernel: Booting paravirtualized kernel on Hyper-V
Jun 20 18:50:26.056722 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 20 18:50:26.056732 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 20 18:50:26.056742 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jun 20 18:50:26.056752 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jun 20 18:50:26.056762 kernel: pcpu-alloc: [0] 0 1
Jun 20 18:50:26.056774 kernel: Hyper-V: PV spinlocks enabled
Jun 20 18:50:26.056784 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 20 18:50:26.056796 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17
Jun 20 18:50:26.056804 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 18:50:26.056812 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jun 20 18:50:26.056819 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 18:50:26.056829 kernel: Fallback order for Node 0: 0
Jun 20 18:50:26.056836 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jun 20 18:50:26.056847 kernel: Policy zone: Normal
Jun 20 18:50:26.056862 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 18:50:26.056870 kernel: software IO TLB: area num 2.
Jun 20 18:50:26.056884 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43488K init, 1588K bss, 312164K reserved, 0K cma-reserved)
Jun 20 18:50:26.056892 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 20 18:50:26.056903 kernel: ftrace: allocating 37938 entries in 149 pages
Jun 20 18:50:26.056912 kernel: ftrace: allocated 149 pages with 4 groups
Jun 20 18:50:26.056922 kernel: Dynamic Preempt: voluntary
Jun 20 18:50:26.056930 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 18:50:26.056939 kernel: rcu: RCU event tracing is enabled.
Jun 20 18:50:26.056950 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 20 18:50:26.056962 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 18:50:26.056972 kernel: Rude variant of Tasks RCU enabled.
Jun 20 18:50:26.056981 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 18:50:26.056991 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 18:50:26.057000 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 20 18:50:26.057011 kernel: Using NULL legacy PIC
Jun 20 18:50:26.057022 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jun 20 18:50:26.057033 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 18:50:26.057041 kernel: Console: colour dummy device 80x25
Jun 20 18:50:26.057053 kernel: printk: console [tty1] enabled
Jun 20 18:50:26.057061 kernel: printk: console [ttyS0] enabled
Jun 20 18:50:26.057072 kernel: printk: bootconsole [earlyser0] disabled
Jun 20 18:50:26.057082 kernel: ACPI: Core revision 20230628
Jun 20 18:50:26.057093 kernel: Failed to register legacy timer interrupt
Jun 20 18:50:26.057104 kernel: APIC: Switch to symmetric I/O mode setup
Jun 20 18:50:26.057118 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jun 20 18:50:26.057128 kernel: Hyper-V: Using IPI hypercalls
Jun 20 18:50:26.057139 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jun 20 18:50:26.057148 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jun 20 18:50:26.057162 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jun 20 18:50:26.057171 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jun 20 18:50:26.057180 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jun 20 18:50:26.057190 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jun 20 18:50:26.057199 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Jun 20 18:50:26.057212 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jun 20 18:50:26.057220 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jun 20 18:50:26.057231 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 20 18:50:26.057239 kernel: Spectre V2 : Mitigation: Retpolines
Jun 20 18:50:26.057250 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 20 18:50:26.057258 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jun 20 18:50:26.057268 kernel: RETBleed: Vulnerable
Jun 20 18:50:26.057277 kernel: Speculative Store Bypass: Vulnerable
Jun 20 18:50:26.057286 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jun 20 18:50:26.057296 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jun 20 18:50:26.057307 kernel: ITS: Mitigation: Aligned branch/return thunks
Jun 20 18:50:26.057317 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 20 18:50:26.057326 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 20 18:50:26.057337 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 20 18:50:26.057345 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jun 20 18:50:26.057356 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jun 20 18:50:26.057364 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jun 20 18:50:26.057375 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 20 18:50:26.057383 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jun 20 18:50:26.057395 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jun 20 18:50:26.057403 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jun 20 18:50:26.057415 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jun 20 18:50:26.057425 kernel: Freeing SMP alternatives memory: 32K
Jun 20 18:50:26.057435 kernel: pid_max: default: 32768 minimum: 301
Jun 20 18:50:26.057444 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jun 20 18:50:26.057454 kernel: landlock: Up and running.
Jun 20 18:50:26.057463 kernel: SELinux: Initializing.
Jun 20 18:50:26.057474 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 18:50:26.057485 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 18:50:26.057495 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jun 20 18:50:26.057505 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 18:50:26.057516 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 18:50:26.057529 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 18:50:26.057539 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jun 20 18:50:26.057548 kernel: signal: max sigframe size: 3632
Jun 20 18:50:26.057558 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 18:50:26.057568 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 18:50:26.059041 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jun 20 18:50:26.059054 kernel: smp: Bringing up secondary CPUs ...
Jun 20 18:50:26.059065 kernel: smpboot: x86: Booting SMP configuration:
Jun 20 18:50:26.059077 kernel: .... node #0, CPUs: #1
Jun 20 18:50:26.059095 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jun 20 18:50:26.059109 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jun 20 18:50:26.059123 kernel: smp: Brought up 1 node, 2 CPUs
Jun 20 18:50:26.059135 kernel: smpboot: Max logical packages: 1
Jun 20 18:50:26.059149 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jun 20 18:50:26.059171 kernel: devtmpfs: initialized
Jun 20 18:50:26.059185 kernel: x86/mm: Memory block size: 128MB
Jun 20 18:50:26.059199 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jun 20 18:50:26.059218 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 18:50:26.059233 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 20 18:50:26.059248 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 18:50:26.059263 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 18:50:26.059277 kernel: audit: initializing netlink subsys (disabled)
Jun 20 18:50:26.059293 kernel: audit: type=2000 audit(1750445425.030:1): state=initialized audit_enabled=0 res=1
Jun 20 18:50:26.059308 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 18:50:26.059323 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 20 18:50:26.059335 kernel: cpuidle: using governor menu
Jun 20 18:50:26.059352 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 18:50:26.059365 kernel: dca service started, version 1.12.1
Jun 20 18:50:26.059379 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jun 20 18:50:26.059394 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 20 18:50:26.059408 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 18:50:26.059423 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 18:50:26.059435 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 18:50:26.059449 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 18:50:26.059463 kernel: ACPI: Added _OSI(Module Device)
Jun 20 18:50:26.059481 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 18:50:26.059497 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 18:50:26.059509 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 18:50:26.059523 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jun 20 18:50:26.059538 kernel: ACPI: Interpreter enabled
Jun 20 18:50:26.059552 kernel: ACPI: PM: (supports S0 S5)
Jun 20 18:50:26.059565 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 18:50:26.059600 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 20 18:50:26.059615 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jun 20 18:50:26.059632 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jun 20 18:50:26.059646 kernel: iommu: Default domain type: Translated
Jun 20 18:50:26.059659 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 20 18:50:26.059674 kernel: efivars: Registered efivars operations
Jun 20 18:50:26.059689 kernel: PCI: Using ACPI for IRQ routing
Jun 20 18:50:26.059703 kernel: PCI: System does not support PCI
Jun 20 18:50:26.059717 kernel: vgaarb: loaded
Jun 20 18:50:26.059732 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jun 20 18:50:26.059747 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 18:50:26.059765 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 18:50:26.059779 kernel: pnp: PnP ACPI init
Jun 20 18:50:26.059795 kernel: pnp: PnP ACPI: found 3 devices
Jun 20 18:50:26.059809 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 20 18:50:26.059824 kernel: NET: Registered PF_INET protocol family
Jun 20 18:50:26.059839 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 20 18:50:26.059854 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jun 20 18:50:26.059869 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 18:50:26.059884 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 18:50:26.059901 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jun 20 18:50:26.059916 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jun 20 18:50:26.059931 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 20 18:50:26.059945 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 20 18:50:26.059960 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 18:50:26.059975 kernel: NET: Registered PF_XDP protocol family
Jun 20 18:50:26.059989 kernel: PCI: CLS 0 bytes, default 64
Jun 20 18:50:26.060003 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jun 20 18:50:26.060017 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
Jun 20 18:50:26.060034 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jun 20 18:50:26.060048 kernel: Initialise system trusted keyrings
Jun 20 18:50:26.060062 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jun 20 18:50:26.060078 kernel: Key type asymmetric registered
Jun 20 18:50:26.060092 kernel: Asymmetric key parser 'x509' registered
Jun 20 18:50:26.060104 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jun 20 18:50:26.060117 kernel: io scheduler mq-deadline registered
Jun 20 18:50:26.060131 kernel: io scheduler kyber registered
Jun 20 18:50:26.060143 kernel: io scheduler bfq registered
Jun 20 18:50:26.060160 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 20 18:50:26.060173 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 18:50:26.060187 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 18:50:26.060202 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jun 20 18:50:26.060216 kernel: i8042: PNP: No PS/2 controller found.
Jun 20 18:50:26.060409 kernel: rtc_cmos 00:02: registered as rtc0
Jun 20 18:50:26.060550 kernel: rtc_cmos 00:02: setting system clock to 2025-06-20T18:50:25 UTC (1750445425)
Jun 20 18:50:26.062719 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jun 20 18:50:26.062746 kernel: intel_pstate: CPU model not supported
Jun 20 18:50:26.062759 kernel: efifb: probing for efifb
Jun 20 18:50:26.062768 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jun 20 18:50:26.062779 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jun 20 18:50:26.062791 kernel: efifb: scrolling: redraw
Jun 20 18:50:26.062802 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun 20 18:50:26.062813 kernel: Console: switching to colour frame buffer device 128x48
Jun 20 18:50:26.062823 kernel: fb0: EFI VGA frame buffer device
Jun 20 18:50:26.062833 kernel: pstore: Using crash dump compression: deflate
Jun 20 18:50:26.062847 kernel: pstore: Registered efi_pstore as persistent store backend
Jun 20 18:50:26.062858 kernel: NET: Registered PF_INET6 protocol family
Jun 20 18:50:26.062867 kernel: Segment Routing with IPv6
Jun 20 18:50:26.062875 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 18:50:26.062886 kernel: NET: Registered PF_PACKET protocol family
Jun 20 18:50:26.062895 kernel: Key type dns_resolver registered
Jun 20 18:50:26.062907 kernel: IPI shorthand broadcast: enabled
Jun 20 18:50:26.062916 kernel: sched_clock: Marking stable (862004200, 41085500)->(1110808500, -207718800)
Jun 20 18:50:26.062927 kernel: registered taskstats version 1
Jun 20 18:50:26.062938 kernel: Loading compiled-in X.509 certificates
Jun 20 18:50:26.062946 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.94-flatcar: 583832681762bbd3c2cbcca308896cbba88c4497'
Jun 20 18:50:26.062954 kernel: Key type .fscrypt registered
Jun 20 18:50:26.062964 kernel: Key type fscrypt-provisioning registered
Jun 20 18:50:26.062973 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 20 18:50:26.062982 kernel: ima: Allocated hash algorithm: sha1
Jun 20 18:50:26.062993 kernel: ima: No architecture policies found
Jun 20 18:50:26.063002 kernel: clk: Disabling unused clocks
Jun 20 18:50:26.063013 kernel: Freeing unused kernel image (initmem) memory: 43488K
Jun 20 18:50:26.063023 kernel: Write protecting the kernel read-only data: 38912k
Jun 20 18:50:26.063031 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Jun 20 18:50:26.063039 kernel: Run /init as init process
Jun 20 18:50:26.063050 kernel: with arguments:
Jun 20 18:50:26.063059 kernel: /init
Jun 20 18:50:26.063067 kernel: with environment:
Jun 20 18:50:26.063078 kernel: HOME=/
Jun 20 18:50:26.063086 kernel: TERM=linux
Jun 20 18:50:26.063094 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 20 18:50:26.063109 systemd[1]: Successfully made /usr/ read-only.
Jun 20 18:50:26.063123 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 18:50:26.063134 systemd[1]: Detected virtualization microsoft.
Jun 20 18:50:26.063144 systemd[1]: Detected architecture x86-64.
Jun 20 18:50:26.063155 systemd[1]: Running in initrd.
Jun 20 18:50:26.063164 systemd[1]: No hostname configured, using default hostname.
Jun 20 18:50:26.063178 systemd[1]: Hostname set to .
Jun 20 18:50:26.063189 systemd[1]: Initializing machine ID from random generator.
Jun 20 18:50:26.063200 systemd[1]: Queued start job for default target initrd.target.
Jun 20 18:50:26.063210 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 18:50:26.063223 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 18:50:26.063232 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 18:50:26.063244 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 18:50:26.063253 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 20 18:50:26.063268 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 20 18:50:26.063280 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 20 18:50:26.063290 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 20 18:50:26.063302 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 18:50:26.063311 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 18:50:26.063323 systemd[1]: Reached target paths.target - Path Units.
Jun 20 18:50:26.063332 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 18:50:26.063344 systemd[1]: Reached target swap.target - Swaps.
Jun 20 18:50:26.063356 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 18:50:26.063367 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 18:50:26.063376 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 18:50:26.063387 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 20 18:50:26.063396 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 20 18:50:26.063408 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 18:50:26.063417 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 18:50:26.063429 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 18:50:26.063438 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 18:50:26.063452 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 18:50:26.063463 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 18:50:26.063473 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 18:50:26.063484 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 18:50:26.063494 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 18:50:26.063505 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 18:50:26.063533 systemd-journald[177]: Collecting audit messages is disabled.
Jun 20 18:50:26.063561 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:50:26.063588 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 18:50:26.063601 systemd-journald[177]: Journal started
Jun 20 18:50:26.063627 systemd-journald[177]: Runtime Journal (/run/log/journal/546566e4743a4b03a59cbba89feed2dc) is 8M, max 158.8M, 150.8M free.
Jun 20 18:50:26.073600 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 18:50:26.076486 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 18:50:26.079728 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 18:50:26.089033 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:50:26.100487 systemd-modules-load[179]: Inserted module 'overlay'
Jun 20 18:50:26.108193 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 18:50:26.112027 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 18:50:26.139628 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 18:50:26.158553 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 18:50:26.158608 kernel: Bridge firewalling registered
Jun 20 18:50:26.153482 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 18:50:26.157649 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jun 20 18:50:26.166827 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 18:50:26.170066 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 18:50:26.173172 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:50:26.182135 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 18:50:26.190893 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 18:50:26.204715 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 18:50:26.210744 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 18:50:26.222980 dracut-cmdline[211]: dracut-dracut-053
Jun 20 18:50:26.225926 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:50:26.233181 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17
Jun 20 18:50:26.247786 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 18:50:26.295009 systemd-resolved[229]: Positive Trust Anchors:
Jun 20 18:50:26.295024 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 18:50:26.295079 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 18:50:26.320209 systemd-resolved[229]: Defaulting to hostname 'linux'.
Jun 20 18:50:26.321527 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 18:50:26.324442 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 18:50:26.337586 kernel: SCSI subsystem initialized
Jun 20 18:50:26.347587 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 18:50:26.358590 kernel: iscsi: registered transport (tcp)
Jun 20 18:50:26.379866 kernel: iscsi: registered transport (qla4xxx)
Jun 20 18:50:26.379966 kernel: QLogic iSCSI HBA Driver
Jun 20 18:50:26.416334 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 18:50:26.423741 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 18:50:26.452031 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 20 18:50:26.452142 kernel: device-mapper: uevent: version 1.0.3
Jun 20 18:50:26.455350 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 20 18:50:26.495617 kernel: raid6: avx512x4 gen() 18420 MB/s
Jun 20 18:50:26.513587 kernel: raid6: avx512x2 gen() 18155 MB/s
Jun 20 18:50:26.532586 kernel: raid6: avx512x1 gen() 18159 MB/s
Jun 20 18:50:26.551586 kernel: raid6: avx2x4 gen() 18209 MB/s
Jun 20 18:50:26.570592 kernel: raid6: avx2x2 gen() 18136 MB/s
Jun 20 18:50:26.590259 kernel: raid6: avx2x1 gen() 13875 MB/s
Jun 20 18:50:26.590318 kernel: raid6: using algorithm avx512x4 gen() 18420 MB/s
Jun 20 18:50:26.611633 kernel: raid6: .... xor() 7761 MB/s, rmw enabled
Jun 20 18:50:26.611670 kernel: raid6: using avx512x2 recovery algorithm
Jun 20 18:50:26.633607 kernel: xor: automatically using best checksumming function avx
Jun 20 18:50:26.775599 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 20 18:50:26.785406 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 18:50:26.795738 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 18:50:26.814020 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Jun 20 18:50:26.819243 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 18:50:26.831761 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 20 18:50:26.844704 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Jun 20 18:50:26.872547 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 18:50:26.879827 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 18:50:26.921997 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 18:50:26.937994 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 20 18:50:26.968716 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 20 18:50:26.969610 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 18:50:26.970933 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 18:50:26.971027 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 18:50:26.984969 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 20 18:50:27.006417 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 18:50:27.026400 kernel: cryptd: max_cpu_qlen set to 1000
Jun 20 18:50:27.026465 kernel: hv_vmbus: Vmbus version:5.2
Jun 20 18:50:27.049024 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 18:50:27.056837 kernel: hv_vmbus: registering driver hyperv_keyboard
Jun 20 18:50:27.049177 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:50:27.062540 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 18:50:27.067215 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:50:27.083281 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jun 20 18:50:27.083344 kernel: hid: raw HID events driver (C) Jiri Kosina
Jun 20 18:50:27.084414 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:50:27.090506 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:50:27.100695 kernel: AVX2 version of gcm_enc/dec engaged.
Jun 20 18:50:27.100731 kernel: hv_vmbus: registering driver hid_hyperv
Jun 20 18:50:27.105054 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jun 20 18:50:27.104952 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:50:27.153147 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jun 20 18:50:27.153372 kernel: pps_core: LinuxPPS API ver. 1 registered
Jun 20 18:50:27.153392 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jun 20 18:50:27.153408 kernel: AES CTR mode by8 optimization enabled
Jun 20 18:50:27.153426 kernel: hv_vmbus: registering driver hv_storvsc
Jun 20 18:50:27.153445 kernel: scsi host0: storvsc_host_t
Jun 20 18:50:27.157054 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jun 20 18:50:27.157263 kernel: scsi host1: storvsc_host_t
Jun 20 18:50:27.157437 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jun 20 18:50:27.159747 kernel: PTP clock support registered
Jun 20 18:50:27.159772 kernel: hv_vmbus: registering driver hv_netvsc
Jun 20 18:50:27.111979 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 18:50:27.152046 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:50:27.152160 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:50:27.168908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:50:27.199094 kernel: hv_utils: Registering HyperV Utility Driver
Jun 20 18:50:27.199158 kernel: hv_vmbus: registering driver hv_utils
Jun 20 18:50:27.205062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:50:27.219424 kernel: hv_utils: Shutdown IC version 3.2
Jun 20 18:50:27.219488 kernel: hv_utils: Heartbeat IC version 3.0
Jun 20 18:50:28.201669 kernel: hv_utils: TimeSync IC version 4.0
Jun 20 18:50:28.201874 systemd-resolved[229]: Clock change detected. Flushing caches.
Jun 20 18:50:28.202584 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 18:50:28.217613 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jun 20 18:50:28.217851 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jun 20 18:50:28.220944 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jun 20 18:50:28.233198 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jun 20 18:50:28.233462 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jun 20 18:50:28.236955 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jun 20 18:50:28.237190 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jun 20 18:50:28.241359 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jun 20 18:50:28.242685 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:50:28.249939 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 18:50:28.252945 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jun 20 18:50:28.406372 kernel: hv_netvsc 7ced8d2d-01e4-7ced-8d2d-01e47ced8d2d eth0: VF slot 1 added
Jun 20 18:50:28.416848 kernel: hv_vmbus: registering driver hv_pci
Jun 20 18:50:28.416942 kernel: hv_pci dab0ab75-d423-48b7-a8aa-34ff36409df8: PCI VMBus probing: Using version 0x10004
Jun 20 18:50:28.421936 kernel: hv_pci dab0ab75-d423-48b7-a8aa-34ff36409df8: PCI host bridge to bus d423:00
Jun 20 18:50:28.422137 kernel: pci_bus d423:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jun 20 18:50:28.424934 kernel: pci_bus d423:00: No busn resource found for root bus, will use [bus 00-ff]
Jun 20 18:50:28.427931 kernel: pci d423:00:02.0: [15b3:1016] type 00 class 0x020000
Jun 20 18:50:28.434964 kernel: pci d423:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jun 20 18:50:28.438950 kernel: pci d423:00:02.0: enabling Extended Tags
Jun 20 18:50:28.450023 kernel: pci d423:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at d423:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jun 20 18:50:28.456737 kernel: pci_bus d423:00: busn_res: [bus 00-ff] end is updated to 00
Jun 20 18:50:28.457084 kernel: pci d423:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jun 20 18:50:28.624350 kernel: mlx5_core d423:00:02.0: enabling device (0000 -> 0002)
Jun 20 18:50:28.624643 kernel: mlx5_core d423:00:02.0: firmware version: 14.30.5000
Jun 20 18:50:28.773193 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jun 20 18:50:28.802941 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (458)
Jun 20 18:50:28.823082 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jun 20 18:50:28.837940 kernel: BTRFS: device fsid 5ff786f3-14e2-4689-ad32-ff903cf13f91 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (453)
Jun 20 18:50:28.863876 kernel: hv_netvsc 7ced8d2d-01e4-7ced-8d2d-01e47ced8d2d eth0: VF registering: eth1
Jun 20 18:50:28.875354 kernel: mlx5_core d423:00:02.0 eth1: joined to eth0
Jun 20 18:50:28.875599 kernel: mlx5_core d423:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jun 20 18:50:28.863791 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jun 20 18:50:28.867065 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jun 20 18:50:28.890938 kernel: mlx5_core d423:00:02.0 enP54307s1: renamed from eth1
Jun 20 18:50:28.898110 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 20 18:50:28.937989 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jun 20 18:50:29.918219 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 18:50:29.919968 disk-uuid[599]: The operation has completed successfully.
Jun 20 18:50:30.005152 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 20 18:50:30.005285 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 20 18:50:30.051076 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 20 18:50:30.059986 sh[688]: Success
Jun 20 18:50:30.089950 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jun 20 18:50:30.303436 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 20 18:50:30.322047 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 20 18:50:30.326542 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 20 18:50:30.345870 kernel: BTRFS info (device dm-0): first mount of filesystem 5ff786f3-14e2-4689-ad32-ff903cf13f91
Jun 20 18:50:30.345956 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 20 18:50:30.349047 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jun 20 18:50:30.351905 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jun 20 18:50:30.354596 kernel: BTRFS info (device dm-0): using free space tree
Jun 20 18:50:30.663234 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 20 18:50:30.666276 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 20 18:50:30.678191 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 20 18:50:30.686128 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 20 18:50:30.704829 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 18:50:30.704911 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 18:50:30.707146 kernel: BTRFS info (device sda6): using free space tree
Jun 20 18:50:30.749204 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 20 18:50:30.755949 kernel: BTRFS info (device sda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 18:50:30.760911 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 20 18:50:30.772183 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 20 18:50:30.776892 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 18:50:30.783082 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 18:50:30.809803 systemd-networkd[869]: lo: Link UP
Jun 20 18:50:30.809813 systemd-networkd[869]: lo: Gained carrier
Jun 20 18:50:30.812200 systemd-networkd[869]: Enumeration completed
Jun 20 18:50:30.812426 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 18:50:30.814313 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:50:30.814317 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 18:50:30.815988 systemd[1]: Reached target network.target - Network.
Jun 20 18:50:30.880948 kernel: mlx5_core d423:00:02.0 enP54307s1: Link up
Jun 20 18:50:30.909953 kernel: hv_netvsc 7ced8d2d-01e4-7ced-8d2d-01e47ced8d2d eth0: Data path switched to VF: enP54307s1
Jun 20 18:50:30.911265 systemd-networkd[869]: enP54307s1: Link UP
Jun 20 18:50:30.911427 systemd-networkd[869]: eth0: Link UP
Jun 20 18:50:30.911592 systemd-networkd[869]: eth0: Gained carrier
Jun 20 18:50:30.911604 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:50:30.916147 systemd-networkd[869]: enP54307s1: Gained carrier
Jun 20 18:50:30.987999 systemd-networkd[869]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jun 20 18:50:32.037732 ignition[864]: Ignition 2.20.0
Jun 20 18:50:32.037745 ignition[864]: Stage: fetch-offline
Jun 20 18:50:32.039834 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 18:50:32.037789 ignition[864]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:50:32.037800 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 18:50:32.037912 ignition[864]: parsed url from cmdline: ""
Jun 20 18:50:32.037928 ignition[864]: no config URL provided
Jun 20 18:50:32.037939 ignition[864]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 18:50:32.037951 ignition[864]: no config at "/usr/lib/ignition/user.ign"
Jun 20 18:50:32.056056 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 20 18:50:32.037958 ignition[864]: failed to fetch config: resource requires networking
Jun 20 18:50:32.038169 ignition[864]: Ignition finished successfully
Jun 20 18:50:32.066806 ignition[878]: Ignition 2.20.0
Jun 20 18:50:32.066815 ignition[878]: Stage: fetch
Jun 20 18:50:32.067063 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:50:32.067073 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 18:50:32.067197 ignition[878]: parsed url from cmdline: ""
Jun 20 18:50:32.067202 ignition[878]: no config URL provided
Jun 20 18:50:32.067208 ignition[878]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 18:50:32.067216 ignition[878]: no config at "/usr/lib/ignition/user.ign"
Jun 20 18:50:32.067251 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jun 20 18:50:32.171580 ignition[878]: GET result: OK
Jun 20 18:50:32.171695 ignition[878]: config has been read from IMDS userdata
Jun 20 18:50:32.171734 ignition[878]: parsing config with SHA512: a0b2955eb71c754fa49739f55fc0eff1779ba03ef0b3e059c6abbdb84840861f83d22f1cb87f135c2247cd610c30b5cbbd934d72c748dc90e6484c8dbdefd1e4
Jun 20 18:50:32.180850 unknown[878]: fetched base config from "system"
Jun 20 18:50:32.181051 unknown[878]: fetched base config from "system"
Jun 20 18:50:32.181056 unknown[878]: fetched user config from "azure"
Jun 20 18:50:32.186959 ignition[878]: fetch: fetch complete
Jun 20 18:50:32.186969 ignition[878]: fetch: fetch passed
Jun 20 18:50:32.188630 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 20 18:50:32.187043 ignition[878]: Ignition finished successfully
Jun 20 18:50:32.203156 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 20 18:50:32.220605 ignition[884]: Ignition 2.20.0
Jun 20 18:50:32.220617 ignition[884]: Stage: kargs
Jun 20 18:50:32.222786 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 20 18:50:32.220840 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:50:32.220854 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 18:50:32.221725 ignition[884]: kargs: kargs passed
Jun 20 18:50:32.221773 ignition[884]: Ignition finished successfully
Jun 20 18:50:32.235135 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 20 18:50:32.250108 ignition[890]: Ignition 2.20.0
Jun 20 18:50:32.250119 ignition[890]: Stage: disks
Jun 20 18:50:32.252088 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 20 18:50:32.250325 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:50:32.256346 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 20 18:50:32.250340 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 18:50:32.260790 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 20 18:50:32.251207 ignition[890]: disks: disks passed
Jun 20 18:50:32.265716 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 18:50:32.251252 ignition[890]: Ignition finished successfully
Jun 20 18:50:32.268116 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 18:50:32.268256 systemd[1]: Reached target basic.target - Basic System.
Jun 20 18:50:32.281137 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 20 18:50:32.297009 systemd-networkd[869]: enP54307s1: Gained IPv6LL
Jun 20 18:50:32.337103 systemd-fsck[898]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jun 20 18:50:32.343307 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 20 18:50:32.358055 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 20 18:50:32.448946 kernel: EXT4-fs (sda9): mounted filesystem 943f8432-3dc9-4e22-b9bd-c29bf6a1f5e1 r/w with ordered data mode. Quota mode: none.
Jun 20 18:50:32.449264 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 20 18:50:32.453903 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 20 18:50:32.516038 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 18:50:32.521589 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 20 18:50:32.532043 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (909)
Jun 20 18:50:32.535551 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jun 20 18:50:32.538126 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 18:50:32.544374 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 18:50:32.544413 kernel: BTRFS info (device sda6): using free space tree
Jun 20 18:50:32.549097 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 20 18:50:32.549178 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 20 18:50:32.549223 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 18:50:32.559478 systemd-networkd[869]: eth0: Gained IPv6LL
Jun 20 18:50:32.562780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 18:50:32.564936 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 20 18:50:32.578079 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 20 18:50:33.257812 initrd-setup-root[935]: cut: /sysroot/etc/passwd: No such file or directory
Jun 20 18:50:33.284946 initrd-setup-root[946]: cut: /sysroot/etc/group: No such file or directory
Jun 20 18:50:33.292541 initrd-setup-root[953]: cut: /sysroot/etc/shadow: No such file or directory
Jun 20 18:50:33.298867 coreos-metadata[911]: Jun 20 18:50:33.298 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jun 20 18:50:33.302455 coreos-metadata[911]: Jun 20 18:50:33.302 INFO Fetch successful
Jun 20 18:50:33.302455 coreos-metadata[911]: Jun 20 18:50:33.302 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jun 20 18:50:33.311936 coreos-metadata[911]: Jun 20 18:50:33.311 INFO Fetch successful
Jun 20 18:50:33.314171 coreos-metadata[911]: Jun 20 18:50:33.313 INFO wrote hostname ci-4230.2.0-a-e7ad40a4c3 to /sysroot/etc/hostname
Jun 20 18:50:33.313865 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 20 18:50:33.352338 initrd-setup-root[961]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 20 18:50:34.157818 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 20 18:50:34.166052 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 20 18:50:34.171080 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 20 18:50:34.185966 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 20 18:50:34.191597 kernel: BTRFS info (device sda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 18:50:34.212663 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 18:50:34.223261 ignition[1029]: INFO : Ignition 2.20.0 Jun 20 18:50:34.223261 ignition[1029]: INFO : Stage: mount Jun 20 18:50:34.229397 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:50:34.229397 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:50:34.229397 ignition[1029]: INFO : mount: mount passed Jun 20 18:50:34.229397 ignition[1029]: INFO : Ignition finished successfully Jun 20 18:50:34.225219 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 18:50:34.236075 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 18:50:34.249377 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:50:34.268941 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1040) Jun 20 18:50:34.268986 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 18:50:34.271932 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 18:50:34.276422 kernel: BTRFS info (device sda6): using free space tree Jun 20 18:50:34.282941 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 18:50:34.284687 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 18:50:34.306604 ignition[1057]: INFO : Ignition 2.20.0 Jun 20 18:50:34.306604 ignition[1057]: INFO : Stage: files Jun 20 18:50:34.310409 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:50:34.310409 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:50:34.310409 ignition[1057]: DEBUG : files: compiled without relabeling support, skipping Jun 20 18:50:34.336889 ignition[1057]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 18:50:34.341116 ignition[1057]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 18:50:34.419568 ignition[1057]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 18:50:34.423332 ignition[1057]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 18:50:34.426820 unknown[1057]: wrote ssh authorized keys file for user: core Jun 20 18:50:34.429424 ignition[1057]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 18:50:34.445383 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 20 18:50:34.451678 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 20 18:50:34.520008 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 18:50:34.801981 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 20 18:50:34.801981 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:50:34.811901 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 20 
18:50:35.394756 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 20 18:50:35.660819 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:50:35.660819 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 20 18:50:35.669833 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 18:50:35.669833 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:50:35.669833 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:50:35.669833 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:50:35.669833 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:50:35.669833 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:50:35.669833 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:50:35.669833 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:50:35.669833 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:50:35.669833 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 18:50:35.669833 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 18:50:35.669833 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 18:50:35.669833 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jun 20 18:50:36.296377 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 20 18:50:37.285395 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 18:50:37.285395 ignition[1057]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 20 18:50:37.317646 ignition[1057]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:50:37.322310 ignition[1057]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:50:37.322310 ignition[1057]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 20 18:50:37.322310 ignition[1057]: INFO : files: op(e): [started] setting 
preset to enabled for "prepare-helm.service" Jun 20 18:50:37.322310 ignition[1057]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 18:50:37.322310 ignition[1057]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:50:37.322310 ignition[1057]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:50:37.322310 ignition[1057]: INFO : files: files passed Jun 20 18:50:37.322310 ignition[1057]: INFO : Ignition finished successfully Jun 20 18:50:37.320146 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 18:50:37.337177 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 18:50:37.359114 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 18:50:37.365546 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 18:50:37.367610 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 18:50:37.474361 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:50:37.474361 initrd-setup-root-after-ignition[1085]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:50:37.487606 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:50:37.478220 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:50:37.479369 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 18:50:37.484132 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 18:50:37.512426 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 18:50:37.512550 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 18:50:37.520661 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 18:50:37.525636 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 18:50:37.525795 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 18:50:37.534037 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 18:50:37.544856 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:50:37.556167 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 18:50:37.568799 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:50:37.574497 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:50:37.577775 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 18:50:37.584402 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 18:50:37.584566 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:50:37.592626 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 18:50:37.597363 systemd[1]: Stopped target basic.target - Basic System. Jun 20 18:50:37.601544 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 18:50:37.604158 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
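Each createFiles op in the files stage above follows the same pattern: GET a source URL ("attempt #1" ... "GET result: OK") and write the body under the /sysroot prefix. A much-simplified sketch of one such op, omitting the retries, verification, and relabeling Ignition actually performs; the helper name and signature are illustrative only.

```python
import os
import urllib.request

def create_file(url: str, dest: str, sysroot: str = "/sysroot") -> str:
    path = sysroot + dest
    os.makedirs(os.path.dirname(path), exist_ok=True)
    print(f"GET {url}: attempt #1")
    with urllib.request.urlopen(url, timeout=30) as resp, open(path, "wb") as out:
        out.write(resp.read())                   # "[finished] writing file ..."
    return path

# The op(3) entry logged earlier, for example:
# create_file("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz",
#             "/opt/helm-v3.13.2-linux-amd64.tar.gz")
```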
Jun 20 18:50:37.609816 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 18:50:37.617863 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 18:50:37.618035 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:50:37.618435 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 18:50:37.618827 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 18:50:37.619236 systemd[1]: Stopped target swap.target - Swaps. Jun 20 18:50:37.619606 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 18:50:37.619745 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:50:37.620908 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:50:37.621337 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:50:37.621749 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 18:50:37.639087 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:50:37.644830 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 18:50:37.645016 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 18:50:37.649863 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 18:50:37.650041 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:50:37.654239 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 18:50:37.658288 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 18:50:37.685457 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 20 18:50:37.685604 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 18:50:37.699315 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 18:50:37.701441 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 18:50:37.703983 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:50:37.708133 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 18:50:37.715961 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 18:50:37.716307 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:50:37.720042 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 18:50:37.720219 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:50:37.731603 ignition[1109]: INFO : Ignition 2.20.0 Jun 20 18:50:37.731603 ignition[1109]: INFO : Stage: umount Jun 20 18:50:37.731603 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:50:37.731603 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:50:37.740020 ignition[1109]: INFO : umount: umount passed Jun 20 18:50:37.740020 ignition[1109]: INFO : Ignition finished successfully Jun 20 18:50:37.745567 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 18:50:37.745697 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 18:50:37.751660 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 18:50:37.751905 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 18:50:37.754661 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jun 20 18:50:37.754721 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 18:50:37.758960 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 18:50:37.759010 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 18:50:37.761254 systemd[1]: Stopped target network.target - Network. Jun 20 18:50:37.774807 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 18:50:37.774867 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:50:37.779906 systemd[1]: Stopped target paths.target - Path Units. Jun 20 18:50:37.784256 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 18:50:37.784370 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:50:37.788957 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 18:50:37.789146 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 18:50:37.802866 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 18:50:37.802937 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:50:37.807191 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 18:50:37.807233 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:50:37.807328 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 18:50:37.807377 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 18:50:37.822122 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 18:50:37.822190 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 18:50:37.826870 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 18:50:37.829334 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 18:50:37.832038 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 18:50:37.834658 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 18:50:37.842579 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 18:50:37.842693 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 18:50:37.856239 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 18:50:37.856495 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 18:50:37.856597 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 18:50:37.863011 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 18:50:37.865945 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 18:50:37.866004 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:50:37.877044 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 18:50:37.879138 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 18:50:37.879202 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:50:37.879330 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:50:37.879372 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:50:37.884347 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 18:50:37.884392 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jun 20 18:50:37.888712 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 18:50:37.888766 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:50:37.891640 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:50:37.899273 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 18:50:37.899338 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:50:37.930561 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 18:50:37.930735 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:50:37.935771 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 18:50:37.935816 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 18:50:37.940996 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 18:50:37.941040 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:50:37.952323 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 18:50:37.952399 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:50:37.957254 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 18:50:37.957305 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 18:50:37.968094 kernel: hv_netvsc 7ced8d2d-01e4-7ced-8d2d-01e47ced8d2d eth0: Data path switched from VF: enP54307s1 Jun 20 18:50:37.968672 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 18:50:37.968746 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:50:37.981081 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 18:50:37.983759 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 18:50:37.983836 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:50:37.991889 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:50:37.992032 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:50:37.998866 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 18:50:37.998946 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:50:37.999302 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 18:50:37.999406 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 18:50:38.002516 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 18:50:38.002607 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 18:50:38.158667 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 18:50:38.179690 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 18:50:38.179819 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 18:50:38.184851 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 18:50:38.189581 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 18:50:38.189652 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 18:50:38.202130 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jun 20 18:50:38.211490 systemd[1]: Switching root. Jun 20 18:50:38.282805 systemd-journald[177]: Journal stopped Jun 20 18:50:42.961897 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Jun 20 18:50:42.961942 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 18:50:42.961956 kernel: SELinux: policy capability open_perms=1 Jun 20 18:50:42.961968 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 18:50:42.961977 kernel: SELinux: policy capability always_check_network=0 Jun 20 18:50:42.961988 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 18:50:42.961998 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 18:50:42.962012 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 18:50:42.962021 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 18:50:42.962032 kernel: audit: type=1403 audit(1750445439.477:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 18:50:42.962043 systemd[1]: Successfully loaded SELinux policy in 172.711ms. Jun 20 18:50:42.962057 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.522ms. Jun 20 18:50:42.962071 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:50:42.962082 systemd[1]: Detected virtualization microsoft. Jun 20 18:50:42.962096 systemd[1]: Detected architecture x86-64. Jun 20 18:50:42.962109 systemd[1]: Detected first boot. Jun 20 18:50:42.962120 systemd[1]: Hostname set to . Jun 20 18:50:42.962132 systemd[1]: Initializing machine ID from random generator. Jun 20 18:50:42.962143 zram_generator::config[1153]: No configuration found. Jun 20 18:50:42.962158 kernel: Guest personality initialized and is inactive Jun 20 18:50:42.962170 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Jun 20 18:50:42.962179 kernel: Initialized host personality Jun 20 18:50:42.962191 kernel: NET: Registered PF_VSOCK protocol family Jun 20 18:50:42.962201 systemd[1]: Populated /etc with preset unit settings. Jun 20 18:50:42.962214 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 18:50:42.962226 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 18:50:42.962237 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 18:50:42.962251 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 18:50:42.962263 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 18:50:42.962274 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 18:50:42.962287 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 18:50:42.962297 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 18:50:42.962312 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 18:50:42.962323 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 18:50:42.962338 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 18:50:42.962351 systemd[1]: Created slice user.slice - User and Session Slice. 
Jun 20 18:50:42.962361 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:50:42.962374 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:50:42.962385 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 18:50:42.962397 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 18:50:42.962414 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 18:50:42.962425 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:50:42.962437 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 18:50:42.962453 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:50:42.962464 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 18:50:42.962477 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 18:50:42.962490 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 18:50:42.962501 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 18:50:42.962514 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:50:42.962525 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:50:42.962539 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:50:42.962552 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:50:42.962566 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 18:50:42.962578 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 18:50:42.962590 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 18:50:42.962604 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:50:42.962618 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:50:42.962630 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:50:42.962640 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 18:50:42.962652 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 18:50:42.962664 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 18:50:42.962678 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 18:50:42.962694 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:50:42.962716 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 18:50:42.962739 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 18:50:42.962759 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 18:50:42.962783 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 18:50:42.962802 systemd[1]: Reached target machines.target - Containers. Jun 20 18:50:42.962824 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jun 20 18:50:42.962845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:50:42.962870 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:50:42.962897 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 18:50:42.962940 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:50:42.962962 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:50:42.962985 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:50:42.963005 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 18:50:42.963027 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:50:42.963051 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 18:50:42.963074 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 18:50:42.963097 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 18:50:42.963118 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 18:50:42.963143 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 18:50:42.963165 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:50:42.963190 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:50:42.963214 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:50:42.963237 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 18:50:42.963259 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 18:50:42.963316 systemd-journald[1261]: Collecting audit messages is disabled. Jun 20 18:50:42.963358 kernel: fuse: init (API version 7.39) Jun 20 18:50:42.963381 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 18:50:42.963403 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:50:42.963436 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 18:50:42.963466 systemd[1]: Stopped verity-setup.service. Jun 20 18:50:42.963487 kernel: ACPI: bus type drm_connector registered Jun 20 18:50:42.963509 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:50:42.963536 systemd-journald[1261]: Journal started Jun 20 18:50:42.963571 systemd-journald[1261]: Runtime Journal (/run/log/journal/19a418a23da34ae0ad2f314d2c4e286d) is 8M, max 158.8M, 150.8M free. Jun 20 18:50:42.354603 systemd[1]: Queued start job for default target multi-user.target. Jun 20 18:50:42.362866 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 20 18:50:42.363275 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 18:50:42.972939 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:50:42.976379 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jun 20 18:50:42.979160 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 18:50:42.981850 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 18:50:42.984197 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 18:50:42.986933 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 18:50:42.989651 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 18:50:42.992328 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 18:50:42.995306 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:50:42.998424 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 18:50:42.998616 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 18:50:43.001684 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:50:43.001878 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:50:43.005196 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:50:43.005436 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:50:43.008459 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:50:43.008657 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:50:43.012405 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 18:50:43.014211 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 18:50:43.018658 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:50:43.021684 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:50:43.025140 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 18:50:43.028618 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 18:50:43.046961 kernel: loop: module loaded Jun 20 18:50:43.047729 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 18:50:43.058442 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 18:50:43.062890 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 18:50:43.065898 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 18:50:43.066057 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:50:43.070243 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 18:50:43.079030 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 18:50:43.085664 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 18:50:43.088562 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:50:43.112130 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 18:50:43.121129 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 18:50:43.123881 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jun 20 18:50:43.125765 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 18:50:43.137901 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:50:43.142703 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 18:50:43.150099 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 18:50:43.155328 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:50:43.156210 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:50:43.169892 systemd-journald[1261]: Time spent on flushing to /var/log/journal/19a418a23da34ae0ad2f314d2c4e286d is 28.261ms for 969 entries. Jun 20 18:50:43.169892 systemd-journald[1261]: System Journal (/var/log/journal/19a418a23da34ae0ad2f314d2c4e286d) is 8M, max 2.6G, 2.6G free. Jun 20 18:50:43.228472 systemd-journald[1261]: Received client request to flush runtime journal. Jun 20 18:50:43.228536 kernel: loop0: detected capacity change from 0 to 147912 Jun 20 18:50:43.165113 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:50:43.168756 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 18:50:43.174184 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 18:50:43.177632 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 18:50:43.184235 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:50:43.191089 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 20 18:50:43.203748 udevadm[1304]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 20 18:50:43.212656 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 18:50:43.216668 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 18:50:43.230100 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 18:50:43.234994 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 18:50:43.271663 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:50:43.293348 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 18:50:43.350780 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 18:50:43.361135 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:50:43.365503 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 18:50:43.416339 systemd-tmpfiles[1313]: ACLs are not supported, ignoring. Jun 20 18:50:43.416363 systemd-tmpfiles[1313]: ACLs are not supported, ignoring. Jun 20 18:50:43.421378 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:50:43.615946 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 18:50:43.646957 kernel: loop1: detected capacity change from 0 to 28272 Jun 20 18:50:44.017963 kernel: loop2: detected capacity change from 0 to 221472 Jun 20 18:50:44.058964 kernel: loop3: detected capacity change from 0 to 138176 Jun 20 18:50:44.317542 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jun 20 18:50:44.326155 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:50:44.351371 systemd-udevd[1321]: Using default interface naming scheme 'v255'. Jun 20 18:50:44.598339 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:50:44.610118 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:50:44.616963 kernel: loop4: detected capacity change from 0 to 147912 Jun 20 18:50:44.642958 kernel: loop5: detected capacity change from 0 to 28272 Jun 20 18:50:44.656949 kernel: loop6: detected capacity change from 0 to 221472 Jun 20 18:50:44.689964 kernel: loop7: detected capacity change from 0 to 138176 Jun 20 18:50:44.708470 (sd-merge)[1331]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 20 18:50:44.709157 (sd-merge)[1331]: Merged extensions into '/usr'. Jun 20 18:50:44.719706 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 18:50:44.721209 systemd[1]: Reload requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 18:50:44.721224 systemd[1]: Reloading... Jun 20 18:50:44.827718 kernel: hv_vmbus: registering driver hv_balloon Jun 20 18:50:44.832953 kernel: hv_vmbus: registering driver hyperv_fb Jun 20 18:50:44.833029 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 20 18:50:44.840959 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 20 18:50:44.850016 zram_generator::config[1374]: No configuration found. Jun 20 18:50:44.860113 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 20 18:50:44.869912 kernel: Console: switching to colour dummy device 80x25 Jun 20 18:50:44.878630 kernel: Console: switching to colour frame buffer device 128x48 Jun 20 18:50:44.879969 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 18:50:45.281978 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1344) Jun 20 18:50:45.337344 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:50:45.460187 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jun 20 18:50:45.536648 systemd[1]: Reloading finished in 814 ms. Jun 20 18:50:45.552471 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 18:50:45.573436 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 20 18:50:45.600394 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 20 18:50:45.617107 systemd[1]: Starting ensure-sysext.service... Jun 20 18:50:45.622105 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 20 18:50:45.627098 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 18:50:45.629344 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:50:45.640063 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 18:50:45.645108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:50:45.685020 systemd[1]: Reload requested from client PID 1509 ('systemctl') (unit ensure-sysext.service)... 
Jun 20 18:50:45.685041 systemd[1]: Reloading... Jun 20 18:50:45.691670 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 18:50:45.696641 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 18:50:45.701519 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 18:50:45.702879 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Jun 20 18:50:45.708269 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Jun 20 18:50:45.715200 lvm[1510]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 18:50:45.750395 systemd-tmpfiles[1512]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:50:45.752647 systemd-tmpfiles[1512]: Skipping /boot Jun 20 18:50:45.803037 zram_generator::config[1553]: No configuration found. Jun 20 18:50:45.805080 systemd-tmpfiles[1512]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:50:45.806104 systemd-tmpfiles[1512]: Skipping /boot Jun 20 18:50:45.963217 systemd-networkd[1330]: lo: Link UP Jun 20 18:50:45.963625 systemd-networkd[1330]: lo: Gained carrier Jun 20 18:50:45.967632 systemd-networkd[1330]: Enumeration completed Jun 20 18:50:45.968156 systemd-networkd[1330]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:50:45.968255 systemd-networkd[1330]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:50:45.989896 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:50:46.018950 kernel: mlx5_core d423:00:02.0 enP54307s1: Link up Jun 20 18:50:46.044577 kernel: hv_netvsc 7ced8d2d-01e4-7ced-8d2d-01e47ced8d2d eth0: Data path switched to VF: enP54307s1 Jun 20 18:50:46.044173 systemd-networkd[1330]: enP54307s1: Link UP Jun 20 18:50:46.044317 systemd-networkd[1330]: eth0: Link UP Jun 20 18:50:46.044322 systemd-networkd[1330]: eth0: Gained carrier Jun 20 18:50:46.044351 systemd-networkd[1330]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:50:46.048300 systemd-networkd[1330]: enP54307s1: Gained carrier Jun 20 18:50:46.078000 systemd-networkd[1330]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 20 18:50:46.172397 systemd[1]: Reloading finished in 485 ms. Jun 20 18:50:46.184848 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 18:50:46.187951 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:50:46.207876 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 20 18:50:46.211362 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 18:50:46.214717 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:50:46.218670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:50:46.229180 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:50:46.239242 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jun 20 18:50:46.266268 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 18:50:46.270220 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 20 18:50:46.279476 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 18:50:46.286525 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 18:50:46.290008 lvm[1624]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 18:50:46.298197 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 18:50:46.316256 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:50:46.327566 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 18:50:46.339892 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 20 18:50:46.356293 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 18:50:46.365840 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:50:46.366224 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:50:46.378003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:50:46.390208 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:50:46.396058 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:50:46.402428 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:50:46.402597 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:50:46.402751 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:50:46.415201 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 18:50:46.420195 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 18:50:46.424147 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:50:46.424371 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:50:46.431841 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:50:46.432069 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:50:46.435448 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:50:46.435673 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:50:46.452765 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:50:46.453486 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jun 20 18:50:46.454078 augenrules[1659]: No rules Jun 20 18:50:46.457190 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:50:46.463013 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:50:46.468558 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:50:46.475180 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:50:46.478183 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:50:46.478541 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:50:46.479040 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 18:50:46.482147 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:50:46.485675 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:50:46.486174 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:50:46.489418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:50:46.489840 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:50:46.492126 systemd-resolved[1633]: Positive Trust Anchors: Jun 20 18:50:46.492142 systemd-resolved[1633]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:50:46.492181 systemd-resolved[1633]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:50:46.493973 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:50:46.494173 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:50:46.497223 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:50:46.497496 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:50:46.501106 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:50:46.501299 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:50:46.507084 systemd[1]: Finished ensure-sysext.service. Jun 20 18:50:46.513596 systemd-resolved[1633]: Using system hostname 'ci-4230.2.0-a-e7ad40a4c3'. Jun 20 18:50:46.515053 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:50:46.515136 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:50:46.530061 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:50:46.533211 systemd[1]: Reached target network.target - Network. 
Jun 20 18:50:46.535476 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:50:47.283810 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 18:50:47.287230 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 18:50:47.401182 systemd-networkd[1330]: enP54307s1: Gained IPv6LL Jun 20 18:50:47.593162 systemd-networkd[1330]: eth0: Gained IPv6LL Jun 20 18:50:47.597118 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 18:50:47.600813 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 18:50:49.206636 ldconfig[1290]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 18:50:49.217299 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 18:50:49.232129 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 18:50:49.243551 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 18:50:49.246673 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:50:49.249188 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 18:50:49.252121 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 18:50:49.255215 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 18:50:49.257757 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 18:50:49.260598 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 18:50:49.263458 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 18:50:49.263506 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:50:49.265716 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:50:49.268999 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 18:50:49.272999 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 18:50:49.278325 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 18:50:49.281515 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 18:50:49.284583 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 18:50:49.298644 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 18:50:49.301720 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 18:50:49.305269 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 18:50:49.307817 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:50:49.309994 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:50:49.312185 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:50:49.312224 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jun 20 18:50:49.319056 systemd[1]: Starting chronyd.service - NTP client/server... Jun 20 18:50:49.324054 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 18:50:49.331091 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 18:50:49.337202 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 18:50:49.348052 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 18:50:49.359182 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 18:50:49.361956 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 18:50:49.362013 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jun 20 18:50:49.364949 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jun 20 18:50:49.367601 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jun 20 18:50:49.373060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:50:49.377482 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 18:50:49.391989 jq[1685]: false Jun 20 18:50:49.393144 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 18:50:49.401044 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 18:50:49.402168 KVP[1690]: KVP starting; pid is:1690 Jun 20 18:50:49.404518 (chronyd)[1681]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 20 18:50:49.407354 KVP[1690]: KVP LIC Version: 3.1 Jun 20 18:50:49.408076 kernel: hv_utils: KVP IC version 4.0 Jun 20 18:50:49.410601 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 18:50:49.419562 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 18:50:49.434138 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 18:50:49.435484 chronyd[1702]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 20 18:50:49.439345 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 18:50:49.440320 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 18:50:49.441861 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 18:50:49.454029 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 18:50:49.461196 jq[1706]: true Jun 20 18:50:49.463806 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 18:50:49.464707 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 18:50:49.478221 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 18:50:49.478530 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jun 20 18:50:49.497041 chronyd[1702]: Timezone right/UTC failed leap second check, ignoring Jun 20 18:50:49.497245 chronyd[1702]: Loaded seccomp filter (level 2) Jun 20 18:50:49.498955 systemd[1]: Started chronyd.service - NTP client/server. Jun 20 18:50:49.502467 extend-filesystems[1687]: Found loop4 Jun 20 18:50:49.502467 extend-filesystems[1687]: Found loop5 Jun 20 18:50:49.502467 extend-filesystems[1687]: Found loop6 Jun 20 18:50:49.502467 extend-filesystems[1687]: Found loop7 Jun 20 18:50:49.502467 extend-filesystems[1687]: Found sda Jun 20 18:50:49.502467 extend-filesystems[1687]: Found sda1 Jun 20 18:50:49.502467 extend-filesystems[1687]: Found sda2 Jun 20 18:50:49.502467 extend-filesystems[1687]: Found sda3 Jun 20 18:50:49.502467 extend-filesystems[1687]: Found usr Jun 20 18:50:49.502467 extend-filesystems[1687]: Found sda4 Jun 20 18:50:49.502467 extend-filesystems[1687]: Found sda6 Jun 20 18:50:49.502467 extend-filesystems[1687]: Found sda7 Jun 20 18:50:49.502467 extend-filesystems[1687]: Found sda9 Jun 20 18:50:49.527620 extend-filesystems[1687]: Checking size of /dev/sda9 Jun 20 18:50:49.543006 (ntainerd)[1726]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 18:50:49.554530 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 18:50:49.554882 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 18:50:49.565914 dbus-daemon[1684]: [system] SELinux support is enabled Jun 20 18:50:49.566115 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 18:50:49.567429 jq[1711]: true Jun 20 18:50:49.573753 update_engine[1703]: I20250620 18:50:49.573376 1703 main.cc:92] Flatcar Update Engine starting Jun 20 18:50:49.586996 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 18:50:49.588705 update_engine[1703]: I20250620 18:50:49.588435 1703 update_check_scheduler.cc:74] Next update check in 6m4s Jun 20 18:50:49.594520 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 18:50:49.594569 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 18:50:49.600190 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 18:50:49.600230 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 18:50:49.607691 systemd[1]: Started update-engine.service - Update Engine. Jun 20 18:50:49.618108 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 18:50:49.619103 systemd-logind[1699]: New seat seat0. Jun 20 18:50:49.623576 systemd-logind[1699]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 18:50:49.624039 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 18:50:49.628506 extend-filesystems[1687]: Old size kept for /dev/sda9 Jun 20 18:50:49.632074 extend-filesystems[1687]: Found sr0 Jun 20 18:50:49.633957 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 18:50:49.634278 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
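
The extend-filesystems entries above are an inventory of loop devices and /dev/sda partitions; "Old size kept for /dev/sda9" means no growth was needed. A rough equivalent of that inventory, read from sysfs (assumes a Linux host; /sys reports sizes in 512-byte sectors):

    # Sketch: enumerate block devices the way the "Found ..." lines do,
    # with an approximate size for each.
    import os

    for name in sorted(os.listdir("/sys/class/block")):
        size_path = f"/sys/class/block/{name}/size"
        try:
            with open(size_path) as f:
                sectors = int(f.read().strip())
        except OSError:
            continue
        print(f"Found {name} ({sectors * 512 // (1024 * 1024)} MiB)")
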
Jun 20 18:50:49.665709 tar[1710]: linux-amd64/helm Jun 20 18:50:49.701359 coreos-metadata[1683]: Jun 20 18:50:49.701 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 18:50:49.712180 coreos-metadata[1683]: Jun 20 18:50:49.712 INFO Fetch successful Jun 20 18:50:49.712382 coreos-metadata[1683]: Jun 20 18:50:49.712 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 20 18:50:49.718942 coreos-metadata[1683]: Jun 20 18:50:49.718 INFO Fetch successful Jun 20 18:50:49.721970 coreos-metadata[1683]: Jun 20 18:50:49.721 INFO Fetching http://168.63.129.16/machine/609215f5-4781-454d-923e-a6d7732746ba/657138d9%2Df181%2D4fa9%2D83ad%2D6d60f8ca10da.%5Fci%2D4230.2.0%2Da%2De7ad40a4c3?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 20 18:50:49.726891 coreos-metadata[1683]: Jun 20 18:50:49.726 INFO Fetch successful Jun 20 18:50:49.726891 coreos-metadata[1683]: Jun 20 18:50:49.726 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 20 18:50:49.738938 coreos-metadata[1683]: Jun 20 18:50:49.738 INFO Fetch successful Jun 20 18:50:49.786303 bash[1769]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:50:49.789434 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 18:50:49.798891 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 20 18:50:49.823011 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 18:50:49.830681 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 18:50:49.858961 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1752) Jun 20 18:50:50.059529 locksmithd[1739]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 18:50:50.341982 sshd_keygen[1729]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 18:50:50.420022 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 18:50:50.432160 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 18:50:50.441557 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 20 18:50:50.464247 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 18:50:50.464489 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 18:50:50.483765 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 18:50:50.500110 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 20 18:50:50.517558 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 18:50:50.529188 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 18:50:50.538570 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 18:50:50.543434 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 18:50:50.637376 tar[1710]: linux-amd64/LICENSE Jun 20 18:50:50.637811 tar[1710]: linux-amd64/README.md Jun 20 18:50:50.651577 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 18:50:50.729383 containerd[1726]: time="2025-06-20T18:50:50.729288800Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jun 20 18:50:50.766220 containerd[1726]: time="2025-06-20T18:50:50.765454500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
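
The metadata fetches above hit the Azure wireserver (168.63.129.16) and the Instance Metadata Service (169.254.169.254). A minimal sketch of the vmSize request with the URL copied from the log; the "Metadata: true" header is an IMDS requirement that the agent adds implicitly, and the call only works from inside an Azure VM:

    # Sketch: fetch the VM size from the Azure Instance Metadata Service.
    import urllib.request

    url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())
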
type=io.containerd.snapshotter.v1 Jun 20 18:50:50.769516 containerd[1726]: time="2025-06-20T18:50:50.768179300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.94-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:50:50.769516 containerd[1726]: time="2025-06-20T18:50:50.768215200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 20 18:50:50.769516 containerd[1726]: time="2025-06-20T18:50:50.768237600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 20 18:50:50.769516 containerd[1726]: time="2025-06-20T18:50:50.768412300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 20 18:50:50.769516 containerd[1726]: time="2025-06-20T18:50:50.768432600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 20 18:50:50.769516 containerd[1726]: time="2025-06-20T18:50:50.768506700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:50:50.769516 containerd[1726]: time="2025-06-20T18:50:50.768522800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:50:50.769516 containerd[1726]: time="2025-06-20T18:50:50.768786700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:50:50.769516 containerd[1726]: time="2025-06-20T18:50:50.768810600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 20 18:50:50.769516 containerd[1726]: time="2025-06-20T18:50:50.768830700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:50:50.769516 containerd[1726]: time="2025-06-20T18:50:50.768844500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 20 18:50:50.769983 containerd[1726]: time="2025-06-20T18:50:50.769005400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:50:50.769983 containerd[1726]: time="2025-06-20T18:50:50.769286200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:50:50.769983 containerd[1726]: time="2025-06-20T18:50:50.769499500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:50:50.769983 containerd[1726]: time="2025-06-20T18:50:50.769540800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jun 20 18:50:50.769983 containerd[1726]: time="2025-06-20T18:50:50.769668100Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 20 18:50:50.769983 containerd[1726]: time="2025-06-20T18:50:50.769728300Z" level=info msg="metadata content store policy set" policy=shared Jun 20 18:50:50.792115 containerd[1726]: time="2025-06-20T18:50:50.791033900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 20 18:50:50.792115 containerd[1726]: time="2025-06-20T18:50:50.791109000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 20 18:50:50.792115 containerd[1726]: time="2025-06-20T18:50:50.791130600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 20 18:50:50.792115 containerd[1726]: time="2025-06-20T18:50:50.791152900Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 20 18:50:50.792115 containerd[1726]: time="2025-06-20T18:50:50.791172400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 20 18:50:50.792115 containerd[1726]: time="2025-06-20T18:50:50.791350300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 20 18:50:50.792115 containerd[1726]: time="2025-06-20T18:50:50.791658200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 20 18:50:50.792115 containerd[1726]: time="2025-06-20T18:50:50.791775200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 20 18:50:50.792115 containerd[1726]: time="2025-06-20T18:50:50.791794700Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 20 18:50:50.792115 containerd[1726]: time="2025-06-20T18:50:50.791813800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 20 18:50:50.792115 containerd[1726]: time="2025-06-20T18:50:50.791833700Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 20 18:50:50.792115 containerd[1726]: time="2025-06-20T18:50:50.791852200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 20 18:50:50.792115 containerd[1726]: time="2025-06-20T18:50:50.791868600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 20 18:50:50.792115 containerd[1726]: time="2025-06-20T18:50:50.791888400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 20 18:50:50.792663 containerd[1726]: time="2025-06-20T18:50:50.791915000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 20 18:50:50.792663 containerd[1726]: time="2025-06-20T18:50:50.791954700Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 20 18:50:50.792663 containerd[1726]: time="2025-06-20T18:50:50.791972300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jun 20 18:50:50.792663 containerd[1726]: time="2025-06-20T18:50:50.791989600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 20 18:50:50.792663 containerd[1726]: time="2025-06-20T18:50:50.792015800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.792663 containerd[1726]: time="2025-06-20T18:50:50.792034400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.792663 containerd[1726]: time="2025-06-20T18:50:50.792051700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.792663 containerd[1726]: time="2025-06-20T18:50:50.792071600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.792663 containerd[1726]: time="2025-06-20T18:50:50.792087200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.792663 containerd[1726]: time="2025-06-20T18:50:50.792104400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.792663 containerd[1726]: time="2025-06-20T18:50:50.792118900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.792663 containerd[1726]: time="2025-06-20T18:50:50.792139100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.792663 containerd[1726]: time="2025-06-20T18:50:50.792164700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.792663 containerd[1726]: time="2025-06-20T18:50:50.792185700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.793186 containerd[1726]: time="2025-06-20T18:50:50.792202600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.793186 containerd[1726]: time="2025-06-20T18:50:50.792217900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.793186 containerd[1726]: time="2025-06-20T18:50:50.792232900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.793186 containerd[1726]: time="2025-06-20T18:50:50.792250600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 20 18:50:50.793186 containerd[1726]: time="2025-06-20T18:50:50.792277700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.793186 containerd[1726]: time="2025-06-20T18:50:50.792296500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.793186 containerd[1726]: time="2025-06-20T18:50:50.792312400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 20 18:50:50.793186 containerd[1726]: time="2025-06-20T18:50:50.792375700Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jun 20 18:50:50.793186 containerd[1726]: time="2025-06-20T18:50:50.792400400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 20 18:50:50.793186 containerd[1726]: time="2025-06-20T18:50:50.792416300Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 20 18:50:50.793186 containerd[1726]: time="2025-06-20T18:50:50.792436000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 20 18:50:50.793186 containerd[1726]: time="2025-06-20T18:50:50.792450300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.793186 containerd[1726]: time="2025-06-20T18:50:50.792467800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 20 18:50:50.793186 containerd[1726]: time="2025-06-20T18:50:50.792483000Z" level=info msg="NRI interface is disabled by configuration." Jun 20 18:50:50.793651 containerd[1726]: time="2025-06-20T18:50:50.792498400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 20 18:50:50.793692 containerd[1726]: time="2025-06-20T18:50:50.792873600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 20 18:50:50.793692 containerd[1726]: time="2025-06-20T18:50:50.793199300Z" level=info msg="Connect containerd service" Jun 20 18:50:50.793692 containerd[1726]: time="2025-06-20T18:50:50.793303000Z" level=info msg="using legacy CRI server" Jun 20 18:50:50.793692 containerd[1726]: time="2025-06-20T18:50:50.793327300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 18:50:50.793692 containerd[1726]: time="2025-06-20T18:50:50.793496100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 20 18:50:50.796872 containerd[1726]: time="2025-06-20T18:50:50.794202600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:50:50.796872 containerd[1726]: time="2025-06-20T18:50:50.794598700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 18:50:50.796872 containerd[1726]: time="2025-06-20T18:50:50.794656400Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 18:50:50.796872 containerd[1726]: time="2025-06-20T18:50:50.794724100Z" level=info msg="Start subscribing containerd event" Jun 20 18:50:50.796872 containerd[1726]: time="2025-06-20T18:50:50.794763900Z" level=info msg="Start recovering state" Jun 20 18:50:50.796872 containerd[1726]: time="2025-06-20T18:50:50.794831100Z" level=info msg="Start event monitor" Jun 20 18:50:50.796872 containerd[1726]: time="2025-06-20T18:50:50.794847400Z" level=info msg="Start snapshots syncer" Jun 20 18:50:50.796872 containerd[1726]: time="2025-06-20T18:50:50.794858000Z" level=info msg="Start cni network conf syncer for default" Jun 20 18:50:50.796872 containerd[1726]: time="2025-06-20T18:50:50.794868800Z" level=info msg="Start streaming server" Jun 20 18:50:50.795687 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 18:50:50.798666 containerd[1726]: time="2025-06-20T18:50:50.798638200Z" level=info msg="containerd successfully booted in 0.070373s" Jun 20 18:50:51.157101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:50:51.160448 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 18:50:51.165135 systemd[1]: Startup finished in 957ms (firmware) + 25.753s (loader) + 1.001s (kernel) + 12.630s (initrd) + 11.858s (userspace) = 52.200s. Jun 20 18:50:51.170317 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:50:51.527019 login[1856]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jun 20 18:50:51.527485 login[1855]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 18:50:51.547861 systemd-logind[1699]: New session 2 of user core. Jun 20 18:50:51.549803 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
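
containerd comes up with no CNI network configuration, which is why it logs "no network config found in /etc/cni/net.d" and defers pod networking until a CNI plugin is installed. A quick sketch reproducing that check (paths from the log; the .conf/.conflist filter is an assumption about which files count):

    # Sketch: check for CNI configs and the containerd socket the log says
    # it is serving on.
    import glob
    import os

    cni_dir = "/etc/cni/net.d"
    confs = (glob.glob(os.path.join(cni_dir, "*.conf"))
             + glob.glob(os.path.join(cni_dir, "*.conflist")))
    print(f"CNI configs in {cni_dir}: {confs or 'none (cni plugin not initialized)'}")
    print("containerd socket present:",
          os.path.exists("/run/containerd/containerd.sock"))
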
Jun 20 18:50:51.558232 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 18:50:51.576207 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 18:50:51.585222 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 18:50:51.593168 (systemd)[1880]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 18:50:51.596927 systemd-logind[1699]: New session c1 of user core. Jun 20 18:50:51.885324 systemd[1880]: Queued start job for default target default.target. Jun 20 18:50:51.890669 systemd[1880]: Created slice app.slice - User Application Slice. Jun 20 18:50:51.890698 systemd[1880]: Reached target paths.target - Paths. Jun 20 18:50:51.890751 systemd[1880]: Reached target timers.target - Timers. Jun 20 18:50:51.892392 systemd[1880]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 18:50:51.912683 kubelet[1869]: E0620 18:50:51.912585 1869 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:50:51.912133 systemd[1880]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 18:50:51.913395 systemd[1880]: Reached target sockets.target - Sockets. Jun 20 18:50:51.913507 systemd[1880]: Reached target basic.target - Basic System. Jun 20 18:50:51.913554 systemd[1880]: Reached target default.target - Main User Target. Jun 20 18:50:51.913596 systemd[1880]: Startup finished in 307ms. Jun 20 18:50:51.914421 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 18:50:51.915262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:50:51.915702 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:50:51.916075 systemd[1]: kubelet.service: Consumed 994ms CPU time, 266.7M memory peak. Jun 20 18:50:51.919512 systemd[1]: Started session-2.scope - Session 2 of User core. 
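
The kubelet failure above is a plain missing-file error: /var/lib/kubelet/config.yaml does not exist yet (it is typically written later, for example by kubeadm), so the unit exits and systemd keeps retrying it. The same check, outside kubelet:

    # Sketch: reproduce the existence check behind the logged kubelet error.
    import os

    cfg = "/var/lib/kubelet/config.yaml"
    if not os.path.isfile(cfg):
        print(f"open {cfg}: no such file or directory -- "
              "kubelet will keep exiting until this file is written")
    else:
        print(f"{cfg} is present ({os.path.getsize(cfg)} bytes)")
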
Jun 20 18:50:52.503937 waagent[1853]: 2025-06-20T18:50:52.503829Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jun 20 18:50:52.506830 waagent[1853]: 2025-06-20T18:50:52.506762Z INFO Daemon Daemon OS: flatcar 4230.2.0 Jun 20 18:50:52.509138 waagent[1853]: 2025-06-20T18:50:52.509028Z INFO Daemon Daemon Python: 3.11.11 Jun 20 18:50:52.526993 waagent[1853]: 2025-06-20T18:50:52.509425Z INFO Daemon Daemon Run daemon Jun 20 18:50:52.526993 waagent[1853]: 2025-06-20T18:50:52.510070Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.0' Jun 20 18:50:52.526993 waagent[1853]: 2025-06-20T18:50:52.510691Z INFO Daemon Daemon Using waagent for provisioning Jun 20 18:50:52.526993 waagent[1853]: 2025-06-20T18:50:52.511651Z INFO Daemon Daemon Activate resource disk Jun 20 18:50:52.526993 waagent[1853]: 2025-06-20T18:50:52.512294Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 20 18:50:52.526993 waagent[1853]: 2025-06-20T18:50:52.517838Z INFO Daemon Daemon Found device: None Jun 20 18:50:52.526993 waagent[1853]: 2025-06-20T18:50:52.518463Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 20 18:50:52.526993 waagent[1853]: 2025-06-20T18:50:52.519219Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 20 18:50:52.526993 waagent[1853]: 2025-06-20T18:50:52.520331Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 18:50:52.526993 waagent[1853]: 2025-06-20T18:50:52.521291Z INFO Daemon Daemon Running default provisioning handler Jun 20 18:50:52.548779 waagent[1853]: 2025-06-20T18:50:52.534103Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jun 20 18:50:52.548779 waagent[1853]: 2025-06-20T18:50:52.541076Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 20 18:50:52.548779 waagent[1853]: 2025-06-20T18:50:52.541282Z INFO Daemon Daemon cloud-init is enabled: False Jun 20 18:50:52.548779 waagent[1853]: 2025-06-20T18:50:52.542895Z INFO Daemon Daemon Copying ovf-env.xml Jun 20 18:50:52.539778 systemd-logind[1699]: New session 1 of user core. Jun 20 18:50:52.530715 login[1856]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 18:50:52.550634 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 18:50:52.660177 waagent[1853]: 2025-06-20T18:50:52.657180Z INFO Daemon Daemon Successfully mounted dvd Jun 20 18:50:52.669747 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 20 18:50:52.672304 waagent[1853]: 2025-06-20T18:50:52.672228Z INFO Daemon Daemon Detect protocol endpoint Jun 20 18:50:52.674778 waagent[1853]: 2025-06-20T18:50:52.674716Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 18:50:52.677600 waagent[1853]: 2025-06-20T18:50:52.677542Z INFO Daemon Daemon WireServer endpoint is not found. 
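
waagent concludes "cloud-init is enabled: False" above by running the systemctl command shown in the log and treating its non-zero exit status (4 here) as not enabled. Reproduced directly, assuming Python 3:

    # Sketch: the cloud-init enablement probe from the log.
    import subprocess

    cmd = ["systemctl", "is-enabled", "cloud-init-local.service"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(f"{' '.join(cmd)} -> rc={result.returncode}, "
          f"stdout={result.stdout.strip()!r}")
    print("cloud-init enabled:", result.returncode == 0)
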
Rerun dhcp handler Jun 20 18:50:52.680600 waagent[1853]: 2025-06-20T18:50:52.680545Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 20 18:50:52.683235 waagent[1853]: 2025-06-20T18:50:52.683181Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 20 18:50:52.685453 waagent[1853]: 2025-06-20T18:50:52.685406Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 20 18:50:52.725045 waagent[1853]: 2025-06-20T18:50:52.724981Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 20 18:50:52.731710 waagent[1853]: 2025-06-20T18:50:52.725467Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 20 18:50:52.731710 waagent[1853]: 2025-06-20T18:50:52.726108Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 20 18:50:52.826296 waagent[1853]: 2025-06-20T18:50:52.826121Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 20 18:50:52.829779 waagent[1853]: 2025-06-20T18:50:52.829710Z INFO Daemon Daemon Forcing an update of the goal state. Jun 20 18:50:52.836757 waagent[1853]: 2025-06-20T18:50:52.836701Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 18:50:52.853803 waagent[1853]: 2025-06-20T18:50:52.853744Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jun 20 18:50:52.867132 waagent[1853]: 2025-06-20T18:50:52.854454Z INFO Daemon Jun 20 18:50:52.867132 waagent[1853]: 2025-06-20T18:50:52.855124Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 4c364867-ee89-4d75-beb7-577fc2c54b8b eTag: 768962593309970129 source: Fabric] Jun 20 18:50:52.867132 waagent[1853]: 2025-06-20T18:50:52.856096Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 20 18:50:52.867132 waagent[1853]: 2025-06-20T18:50:52.857077Z INFO Daemon Jun 20 18:50:52.867132 waagent[1853]: 2025-06-20T18:50:52.857678Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 20 18:50:52.869810 waagent[1853]: 2025-06-20T18:50:52.869756Z INFO Daemon Daemon Downloading artifacts profile blob Jun 20 18:50:53.004137 waagent[1853]: 2025-06-20T18:50:53.004050Z INFO Daemon Downloaded certificate {'thumbprint': 'C205AC377493B6C880883708034E236925929EA5', 'hasPrivateKey': True} Jun 20 18:50:53.010906 waagent[1853]: 2025-06-20T18:50:53.004831Z INFO Daemon Fetch goal state completed Jun 20 18:50:53.012611 waagent[1853]: 2025-06-20T18:50:53.012559Z INFO Daemon Daemon Starting provisioning Jun 20 18:50:53.018635 waagent[1853]: 2025-06-20T18:50:53.012780Z INFO Daemon Daemon Handle ovf-env.xml. Jun 20 18:50:53.018635 waagent[1853]: 2025-06-20T18:50:53.013722Z INFO Daemon Daemon Set hostname [ci-4230.2.0-a-e7ad40a4c3] Jun 20 18:50:53.042824 waagent[1853]: 2025-06-20T18:50:53.042728Z INFO Daemon Daemon Publish hostname [ci-4230.2.0-a-e7ad40a4c3] Jun 20 18:50:53.049864 waagent[1853]: 2025-06-20T18:50:53.043302Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 20 18:50:53.049864 waagent[1853]: 2025-06-20T18:50:53.044058Z INFO Daemon Daemon Primary interface is [eth0] Jun 20 18:50:53.054542 systemd-networkd[1330]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:50:53.054553 systemd-networkd[1330]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
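
"Examine /proc/net/route for primary interface" above amounts to finding the interface that owns the default route (Destination == 00000000). A minimal sketch of that lookup, assuming the standard /proc/net/route layout:

    # Sketch: find the primary interface via the default route entry.
    def primary_interface(path="/proc/net/route"):
        with open(path) as f:
            next(f)  # skip the header line
            for line in f:
                fields = line.split()
                iface, destination = fields[0], fields[1]
                if destination == "00000000":  # default route
                    return iface
        return None

    print("Primary interface:", primary_interface())
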
Jun 20 18:50:53.054607 systemd-networkd[1330]: eth0: DHCP lease lost Jun 20 18:50:53.055956 waagent[1853]: 2025-06-20T18:50:53.055825Z INFO Daemon Daemon Create user account if not exists Jun 20 18:50:53.060143 waagent[1853]: 2025-06-20T18:50:53.056232Z INFO Daemon Daemon User core already exists, skip useradd Jun 20 18:50:53.060143 waagent[1853]: 2025-06-20T18:50:53.056856Z INFO Daemon Daemon Configure sudoer Jun 20 18:50:53.060143 waagent[1853]: 2025-06-20T18:50:53.058080Z INFO Daemon Daemon Configure sshd Jun 20 18:50:53.060143 waagent[1853]: 2025-06-20T18:50:53.059118Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 20 18:50:53.060143 waagent[1853]: 2025-06-20T18:50:53.059691Z INFO Daemon Daemon Deploy ssh public key. Jun 20 18:50:53.107980 systemd-networkd[1330]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 20 18:50:54.195094 waagent[1853]: 2025-06-20T18:50:54.195016Z INFO Daemon Daemon Provisioning complete Jun 20 18:50:54.206445 waagent[1853]: 2025-06-20T18:50:54.206385Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 20 18:50:54.212775 waagent[1853]: 2025-06-20T18:50:54.206696Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 20 18:50:54.212775 waagent[1853]: 2025-06-20T18:50:54.207514Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jun 20 18:50:54.332133 waagent[1932]: 2025-06-20T18:50:54.332031Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jun 20 18:50:54.332584 waagent[1932]: 2025-06-20T18:50:54.332201Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.0 Jun 20 18:50:54.332584 waagent[1932]: 2025-06-20T18:50:54.332283Z INFO ExtHandler ExtHandler Python: 3.11.11 Jun 20 18:50:54.367810 waagent[1932]: 2025-06-20T18:50:54.367711Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jun 20 18:50:54.368073 waagent[1932]: 2025-06-20T18:50:54.368020Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:50:54.368170 waagent[1932]: 2025-06-20T18:50:54.368130Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:50:54.375674 waagent[1932]: 2025-06-20T18:50:54.375609Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 18:50:54.381495 waagent[1932]: 2025-06-20T18:50:54.381440Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jun 20 18:50:54.381978 waagent[1932]: 2025-06-20T18:50:54.381905Z INFO ExtHandler Jun 20 18:50:54.382126 waagent[1932]: 2025-06-20T18:50:54.382029Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 687a19b2-eda9-49cc-b857-167041cb1c6d eTag: 768962593309970129 source: Fabric] Jun 20 18:50:54.382421 waagent[1932]: 2025-06-20T18:50:54.382368Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jun 20 18:50:54.382991 waagent[1932]: 2025-06-20T18:50:54.382934Z INFO ExtHandler Jun 20 18:50:54.383063 waagent[1932]: 2025-06-20T18:50:54.383022Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 20 18:50:54.397179 waagent[1932]: 2025-06-20T18:50:54.397135Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 20 18:50:54.468472 waagent[1932]: 2025-06-20T18:50:54.468332Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C205AC377493B6C880883708034E236925929EA5', 'hasPrivateKey': True} Jun 20 18:50:54.468974 waagent[1932]: 2025-06-20T18:50:54.468889Z INFO ExtHandler Fetch goal state completed Jun 20 18:50:54.482065 waagent[1932]: 2025-06-20T18:50:54.481998Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1932 Jun 20 18:50:54.482210 waagent[1932]: 2025-06-20T18:50:54.482160Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 20 18:50:54.483764 waagent[1932]: 2025-06-20T18:50:54.483704Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 20 18:50:54.484143 waagent[1932]: 2025-06-20T18:50:54.484094Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 20 18:50:54.500775 waagent[1932]: 2025-06-20T18:50:54.500730Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 20 18:50:54.500996 waagent[1932]: 2025-06-20T18:50:54.500952Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 20 18:50:54.507462 waagent[1932]: 2025-06-20T18:50:54.507182Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 20 18:50:54.514018 systemd[1]: Reload requested from client PID 1945 ('systemctl') (unit waagent.service)... Jun 20 18:50:54.514035 systemd[1]: Reloading... Jun 20 18:50:54.603987 zram_generator::config[1980]: No configuration found. Jun 20 18:50:54.740424 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:50:54.868695 systemd[1]: Reloading finished in 354 ms. Jun 20 18:50:54.887933 waagent[1932]: 2025-06-20T18:50:54.886159Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jun 20 18:50:54.895843 systemd[1]: Reload requested from client PID 2041 ('systemctl') (unit waagent.service)... Jun 20 18:50:54.895860 systemd[1]: Reloading... Jun 20 18:50:54.991949 zram_generator::config[2083]: No configuration found. Jun 20 18:50:55.120113 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:50:55.238947 systemd[1]: Reloading finished in 342 ms. Jun 20 18:50:55.260999 waagent[1932]: 2025-06-20T18:50:55.259191Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 20 18:50:55.260999 waagent[1932]: 2025-06-20T18:50:55.259396Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 20 18:50:56.326247 waagent[1932]: 2025-06-20T18:50:56.326133Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jun 20 18:50:56.327077 waagent[1932]: 2025-06-20T18:50:56.326993Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jun 20 18:50:56.328045 waagent[1932]: 2025-06-20T18:50:56.327946Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 20 18:50:56.328179 waagent[1932]: 2025-06-20T18:50:56.328118Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:50:56.328568 waagent[1932]: 2025-06-20T18:50:56.328506Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:50:56.328872 waagent[1932]: 2025-06-20T18:50:56.328799Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 20 18:50:56.329419 waagent[1932]: 2025-06-20T18:50:56.329351Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 20 18:50:56.329518 waagent[1932]: 2025-06-20T18:50:56.329472Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:50:56.329657 waagent[1932]: 2025-06-20T18:50:56.329585Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:50:56.330016 waagent[1932]: 2025-06-20T18:50:56.329953Z INFO EnvHandler ExtHandler Configure routes Jun 20 18:50:56.330310 waagent[1932]: 2025-06-20T18:50:56.330243Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 20 18:50:56.330462 waagent[1932]: 2025-06-20T18:50:56.330400Z INFO EnvHandler ExtHandler Gateway:None Jun 20 18:50:56.330738 waagent[1932]: 2025-06-20T18:50:56.330660Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 20 18:50:56.330738 waagent[1932]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 20 18:50:56.330738 waagent[1932]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jun 20 18:50:56.330738 waagent[1932]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 20 18:50:56.330738 waagent[1932]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:50:56.330738 waagent[1932]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:50:56.330738 waagent[1932]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:50:56.331156 waagent[1932]: 2025-06-20T18:50:56.330769Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 20 18:50:56.331809 waagent[1932]: 2025-06-20T18:50:56.331740Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 20 18:50:56.331894 waagent[1932]: 2025-06-20T18:50:56.331828Z INFO EnvHandler ExtHandler Routes:None Jun 20 18:50:56.332111 waagent[1932]: 2025-06-20T18:50:56.332050Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
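
The Destination and Gateway columns in the routing table above are little-endian hex IPv4 addresses; decoding them recovers the values seen elsewhere in the log (0108C80A is 10.200.8.1, 10813FA8 is 168.63.129.16, FEA9FEA9 is 169.254.169.254):

    # Sketch: decode the hex route entries dumped from /proc/net/route.
    import socket
    import struct

    def decode(hexaddr: str) -> str:
        return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

    for value in ("00000000", "0108C80A", "0008C80A", "10813FA8", "FEA9FEA9"):
        print(value, "->", decode(value))
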
Jun 20 18:50:56.332727 waagent[1932]: 2025-06-20T18:50:56.332673Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 20 18:50:56.341071 waagent[1932]: 2025-06-20T18:50:56.341003Z INFO ExtHandler ExtHandler Jun 20 18:50:56.341146 waagent[1932]: 2025-06-20T18:50:56.341111Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 336f5a17-120b-422b-a600-d260b5da4506 correlation f3a91ad5-b020-4653-bbb9-7ef8e708de88 created: 2025-06-20T18:49:47.190544Z] Jun 20 18:50:56.344266 waagent[1932]: 2025-06-20T18:50:56.344210Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 20 18:50:56.345107 waagent[1932]: 2025-06-20T18:50:56.345049Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 4 ms] Jun 20 18:50:56.381103 waagent[1932]: 2025-06-20T18:50:56.380953Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 73E64CCB-46D9-472F-9E3E-09DC280B8FF6;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jun 20 18:50:56.394179 waagent[1932]: 2025-06-20T18:50:56.394109Z INFO MonitorHandler ExtHandler Network interfaces: Jun 20 18:50:56.394179 waagent[1932]: Executing ['ip', '-a', '-o', 'link']: Jun 20 18:50:56.394179 waagent[1932]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 20 18:50:56.394179 waagent[1932]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2d:01:e4 brd ff:ff:ff:ff:ff:ff Jun 20 18:50:56.394179 waagent[1932]: 3: enP54307s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2d:01:e4 brd ff:ff:ff:ff:ff:ff\ altname enP54307p0s2 Jun 20 18:50:56.394179 waagent[1932]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 20 18:50:56.394179 waagent[1932]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 20 18:50:56.394179 waagent[1932]: 2: eth0 inet 10.200.8.40/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 20 18:50:56.394179 waagent[1932]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 20 18:50:56.394179 waagent[1932]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 20 18:50:56.394179 waagent[1932]: 2: eth0 inet6 fe80::7eed:8dff:fe2d:1e4/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 18:50:56.394179 waagent[1932]: 3: enP54307s1 inet6 fe80::7eed:8dff:fe2d:1e4/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 18:50:56.425715 waagent[1932]: 2025-06-20T18:50:56.425643Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jun 20 18:50:56.425715 waagent[1932]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:50:56.425715 waagent[1932]: pkts bytes target prot opt in out source destination Jun 20 18:50:56.425715 waagent[1932]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:50:56.425715 waagent[1932]: pkts bytes target prot opt in out source destination Jun 20 18:50:56.425715 waagent[1932]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:50:56.425715 waagent[1932]: pkts bytes target prot opt in out source destination Jun 20 18:50:56.425715 waagent[1932]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 18:50:56.425715 waagent[1932]: 4 594 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 18:50:56.425715 waagent[1932]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 18:50:56.429093 waagent[1932]: 2025-06-20T18:50:56.429034Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 20 18:50:56.429093 waagent[1932]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:50:56.429093 waagent[1932]: pkts bytes target prot opt in out source destination Jun 20 18:50:56.429093 waagent[1932]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:50:56.429093 waagent[1932]: pkts bytes target prot opt in out source destination Jun 20 18:50:56.429093 waagent[1932]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:50:56.429093 waagent[1932]: pkts bytes target prot opt in out source destination Jun 20 18:50:56.429093 waagent[1932]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 18:50:56.429093 waagent[1932]: 5 646 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 18:50:56.429093 waagent[1932]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 18:50:56.429465 waagent[1932]: 2025-06-20T18:50:56.429336Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 20 18:51:01.974670 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 18:51:01.980148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:51:02.099983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:51:02.104409 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:51:02.776130 kubelet[2176]: E0620 18:51:02.776068 2176 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:51:02.779841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:51:02.780097 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:51:02.780564 systemd[1]: kubelet.service: Consumed 153ms CPU time, 110.6M memory peak. Jun 20 18:51:04.003743 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 18:51:04.015592 systemd[1]: Started sshd@0-10.200.8.40:22-10.200.16.10:36194.service - OpenSSH per-connection server daemon (10.200.16.10:36194). 
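
The three OUTPUT rules listed above allow DNS and root-owned (UID 0) traffic to the wireserver and drop new connections to it from everyone else. A sketch of equivalent iptables invocations; this illustrates the rule set, not necessarily the exact commands waagent runs, and applying them requires root:

    # Sketch: equivalent commands for the Azure fabric firewall rules.
    import subprocess

    WIRESERVER = "168.63.129.16"
    rules = [
        ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "--dport", "53", "-j", "ACCEPT"],
        ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    APPLY = False  # set True and run as root to actually install the rules
    for rule in rules:
        print(" ".join(rule))
        if APPLY:
            subprocess.run(rule, check=True)
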
Jun 20 18:51:04.800590 sshd[2184]: Accepted publickey for core from 10.200.16.10 port 36194 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:51:04.802205 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:51:04.806438 systemd-logind[1699]: New session 3 of user core. Jun 20 18:51:04.813071 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 18:51:05.367224 systemd[1]: Started sshd@1-10.200.8.40:22-10.200.16.10:36196.service - OpenSSH per-connection server daemon (10.200.16.10:36196). Jun 20 18:51:05.992104 sshd[2189]: Accepted publickey for core from 10.200.16.10 port 36196 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:51:05.993772 sshd-session[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:51:05.998562 systemd-logind[1699]: New session 4 of user core. Jun 20 18:51:06.006091 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 18:51:06.435604 sshd[2191]: Connection closed by 10.200.16.10 port 36196 Jun 20 18:51:06.436635 sshd-session[2189]: pam_unix(sshd:session): session closed for user core Jun 20 18:51:06.439426 systemd[1]: sshd@1-10.200.8.40:22-10.200.16.10:36196.service: Deactivated successfully. Jun 20 18:51:06.441542 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 18:51:06.443013 systemd-logind[1699]: Session 4 logged out. Waiting for processes to exit. Jun 20 18:51:06.443955 systemd-logind[1699]: Removed session 4. Jun 20 18:51:06.598247 systemd[1]: Started sshd@2-10.200.8.40:22-10.200.16.10:36208.service - OpenSSH per-connection server daemon (10.200.16.10:36208). Jun 20 18:51:07.241867 sshd[2197]: Accepted publickey for core from 10.200.16.10 port 36208 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:51:07.248488 sshd-session[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:51:07.252816 systemd-logind[1699]: New session 5 of user core. Jun 20 18:51:07.261069 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 18:51:07.730335 sshd[2199]: Connection closed by 10.200.16.10 port 36208 Jun 20 18:51:07.731218 sshd-session[2197]: pam_unix(sshd:session): session closed for user core Jun 20 18:51:07.734701 systemd[1]: sshd@2-10.200.8.40:22-10.200.16.10:36208.service: Deactivated successfully. Jun 20 18:51:07.736964 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 18:51:07.738503 systemd-logind[1699]: Session 5 logged out. Waiting for processes to exit. Jun 20 18:51:07.739573 systemd-logind[1699]: Removed session 5. Jun 20 18:51:07.846258 systemd[1]: Started sshd@3-10.200.8.40:22-10.200.16.10:36216.service - OpenSSH per-connection server daemon (10.200.16.10:36216). Jun 20 18:51:08.471678 sshd[2205]: Accepted publickey for core from 10.200.16.10 port 36216 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:51:08.473301 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:51:08.477624 systemd-logind[1699]: New session 6 of user core. Jun 20 18:51:08.485095 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 18:51:08.920154 sshd[2207]: Connection closed by 10.200.16.10 port 36216 Jun 20 18:51:08.921082 sshd-session[2205]: pam_unix(sshd:session): session closed for user core Jun 20 18:51:08.924460 systemd[1]: sshd@3-10.200.8.40:22-10.200.16.10:36216.service: Deactivated successfully. 
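
The "SHA256:f2nn…" value in the Accepted publickey lines is the standard OpenSSH fingerprint: an unpadded base64 encoding of the SHA-256 of the raw key blob. A sketch that recomputes it from an authorized_keys-style line (the key line in the comment is hypothetical):

    # Sketch: recompute an OpenSSH SHA256 key fingerprint.
    import base64
    import hashlib

    def ssh_fingerprint(pubkey_line: str) -> str:
        blob = base64.b64decode(pubkey_line.split()[1])  # second field is the key blob
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # example with a hypothetical key line:
    # print(ssh_fingerprint("ssh-rsa AAAAB3NzaC1yc2EAAA... core@host"))
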
Jun 20 18:51:08.926642 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 18:51:08.928218 systemd-logind[1699]: Session 6 logged out. Waiting for processes to exit. Jun 20 18:51:08.929219 systemd-logind[1699]: Removed session 6. Jun 20 18:51:09.032043 systemd[1]: Started sshd@4-10.200.8.40:22-10.200.16.10:37660.service - OpenSSH per-connection server daemon (10.200.16.10:37660). Jun 20 18:51:09.659796 sshd[2213]: Accepted publickey for core from 10.200.16.10 port 37660 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:51:09.661308 sshd-session[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:51:09.666123 systemd-logind[1699]: New session 7 of user core. Jun 20 18:51:09.673226 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 18:51:10.122686 sudo[2216]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 18:51:10.123107 sudo[2216]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:51:10.150485 sudo[2216]: pam_unix(sudo:session): session closed for user root Jun 20 18:51:10.250512 sshd[2215]: Connection closed by 10.200.16.10 port 37660 Jun 20 18:51:10.251636 sshd-session[2213]: pam_unix(sshd:session): session closed for user core Jun 20 18:51:10.255208 systemd[1]: sshd@4-10.200.8.40:22-10.200.16.10:37660.service: Deactivated successfully. Jun 20 18:51:10.257354 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 18:51:10.258880 systemd-logind[1699]: Session 7 logged out. Waiting for processes to exit. Jun 20 18:51:10.260192 systemd-logind[1699]: Removed session 7. Jun 20 18:51:10.365231 systemd[1]: Started sshd@5-10.200.8.40:22-10.200.16.10:37664.service - OpenSSH per-connection server daemon (10.200.16.10:37664). Jun 20 18:51:10.989809 sshd[2222]: Accepted publickey for core from 10.200.16.10 port 37664 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:51:10.991350 sshd-session[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:51:10.995627 systemd-logind[1699]: New session 8 of user core. Jun 20 18:51:11.004074 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 18:51:11.334127 sudo[2226]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 18:51:11.334489 sudo[2226]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:51:11.338115 sudo[2226]: pam_unix(sudo:session): session closed for user root Jun 20 18:51:11.343580 sudo[2225]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 18:51:11.343945 sudo[2225]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:51:11.357334 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:51:11.384141 augenrules[2248]: No rules Jun 20 18:51:11.385565 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:51:11.385819 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:51:11.387364 sudo[2225]: pam_unix(sudo:session): session closed for user root Jun 20 18:51:11.487371 sshd[2224]: Connection closed by 10.200.16.10 port 37664 Jun 20 18:51:11.488173 sshd-session[2222]: pam_unix(sshd:session): session closed for user core Jun 20 18:51:11.491738 systemd[1]: sshd@5-10.200.8.40:22-10.200.16.10:37664.service: Deactivated successfully. 
Jun 20 18:51:11.493789 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 18:51:11.495475 systemd-logind[1699]: Session 8 logged out. Waiting for processes to exit. Jun 20 18:51:11.496447 systemd-logind[1699]: Removed session 8. Jun 20 18:51:11.603252 systemd[1]: Started sshd@6-10.200.8.40:22-10.200.16.10:37668.service - OpenSSH per-connection server daemon (10.200.16.10:37668). Jun 20 18:51:12.227040 sshd[2257]: Accepted publickey for core from 10.200.16.10 port 37668 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:51:12.228479 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:51:12.232729 systemd-logind[1699]: New session 9 of user core. Jun 20 18:51:12.241076 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 18:51:12.571063 sudo[2260]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 18:51:12.571341 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:51:12.974546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 18:51:12.980151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:51:13.150256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:51:13.154500 (kubelet)[2277]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:51:13.307729 chronyd[1702]: Selected source PHC0 Jun 20 18:51:13.847713 kubelet[2277]: E0620 18:51:13.847617 2277 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:51:13.850023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:51:13.850257 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:51:13.850777 systemd[1]: kubelet.service: Consumed 152ms CPU time, 110.5M memory peak. Jun 20 18:51:15.194244 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 18:51:15.195245 (dockerd)[2293]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 18:51:17.410663 dockerd[2293]: time="2025-06-20T18:51:17.410495919Z" level=info msg="Starting up" Jun 20 18:51:17.744335 dockerd[2293]: time="2025-06-20T18:51:17.744286419Z" level=info msg="Loading containers: start." Jun 20 18:51:17.943973 kernel: Initializing XFRM netlink socket Jun 20 18:51:18.069265 systemd-networkd[1330]: docker0: Link UP Jun 20 18:51:18.131081 dockerd[2293]: time="2025-06-20T18:51:18.131038719Z" level=info msg="Loading containers: done." 
Jun 20 18:51:18.152679 dockerd[2293]: time="2025-06-20T18:51:18.152623719Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 18:51:18.152844 dockerd[2293]: time="2025-06-20T18:51:18.152730819Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jun 20 18:51:18.152899 dockerd[2293]: time="2025-06-20T18:51:18.152850919Z" level=info msg="Daemon has completed initialization" Jun 20 18:51:18.204009 dockerd[2293]: time="2025-06-20T18:51:18.203658619Z" level=info msg="API listen on /run/docker.sock" Jun 20 18:51:18.203864 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 18:51:19.578989 containerd[1726]: time="2025-06-20T18:51:19.578951519Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jun 20 18:51:20.231990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430677476.mount: Deactivated successfully. Jun 20 18:51:22.029365 containerd[1726]: time="2025-06-20T18:51:22.029308436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:22.035748 containerd[1726]: time="2025-06-20T18:51:22.035680553Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077752" Jun 20 18:51:22.041747 containerd[1726]: time="2025-06-20T18:51:22.041685463Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:22.045654 containerd[1726]: time="2025-06-20T18:51:22.045602535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:22.046801 containerd[1726]: time="2025-06-20T18:51:22.046569152Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.467579933s" Jun 20 18:51:22.046801 containerd[1726]: time="2025-06-20T18:51:22.046613353Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jun 20 18:51:22.047435 containerd[1726]: time="2025-06-20T18:51:22.047404268Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jun 20 18:51:23.940898 containerd[1726]: time="2025-06-20T18:51:23.940840645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:23.943819 containerd[1726]: time="2025-06-20T18:51:23.943762299Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713302" Jun 20 18:51:23.947523 containerd[1726]: time="2025-06-20T18:51:23.947450566Z" level=info msg="ImageCreate event 
name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:23.953761 containerd[1726]: time="2025-06-20T18:51:23.953707381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:23.954719 containerd[1726]: time="2025-06-20T18:51:23.954685099Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.907241031s" Jun 20 18:51:23.954965 containerd[1726]: time="2025-06-20T18:51:23.954829102Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jun 20 18:51:23.955794 containerd[1726]: time="2025-06-20T18:51:23.955586715Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jun 20 18:51:23.974436 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 20 18:51:23.983157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:51:24.102838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:51:24.115276 (kubelet)[2543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:51:24.790041 kubelet[2543]: E0620 18:51:24.789964 2543 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:51:24.792265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:51:24.792484 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:51:24.792986 systemd[1]: kubelet.service: Consumed 148ms CPU time, 110.6M memory peak. 
Jun 20 18:51:26.470471 containerd[1726]: time="2025-06-20T18:51:26.470397673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:26.473665 containerd[1726]: time="2025-06-20T18:51:26.473577831Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783679" Jun 20 18:51:26.478279 containerd[1726]: time="2025-06-20T18:51:26.478219216Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:26.486141 containerd[1726]: time="2025-06-20T18:51:26.486088261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:26.487292 containerd[1726]: time="2025-06-20T18:51:26.487141080Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 2.531516664s" Jun 20 18:51:26.487292 containerd[1726]: time="2025-06-20T18:51:26.487182481Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jun 20 18:51:26.488339 containerd[1726]: time="2025-06-20T18:51:26.488299301Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jun 20 18:51:27.734914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2385855893.mount: Deactivated successfully. 
Jun 20 18:51:28.310508 containerd[1726]: time="2025-06-20T18:51:28.310448330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:28.315903 containerd[1726]: time="2025-06-20T18:51:28.315831662Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951" Jun 20 18:51:28.319977 containerd[1726]: time="2025-06-20T18:51:28.319889586Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:28.325418 containerd[1726]: time="2025-06-20T18:51:28.325346919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:28.326884 containerd[1726]: time="2025-06-20T18:51:28.326313124Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.837928922s" Jun 20 18:51:28.326884 containerd[1726]: time="2025-06-20T18:51:28.326361025Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jun 20 18:51:28.327219 containerd[1726]: time="2025-06-20T18:51:28.327195030Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 18:51:28.888849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount297792583.mount: Deactivated successfully. 
Jun 20 18:51:30.268796 containerd[1726]: time="2025-06-20T18:51:30.268734821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:30.271365 containerd[1726]: time="2025-06-20T18:51:30.271286868Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jun 20 18:51:30.274384 containerd[1726]: time="2025-06-20T18:51:30.274314224Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:30.278974 containerd[1726]: time="2025-06-20T18:51:30.278906109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:30.280013 containerd[1726]: time="2025-06-20T18:51:30.279980428Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.952640797s" Jun 20 18:51:30.280254 containerd[1726]: time="2025-06-20T18:51:30.280128431Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 20 18:51:30.281023 containerd[1726]: time="2025-06-20T18:51:30.280781843Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 18:51:30.875782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2815419317.mount: Deactivated successfully. 
Jun 20 18:51:30.897315 containerd[1726]: time="2025-06-20T18:51:30.897264088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:30.899614 containerd[1726]: time="2025-06-20T18:51:30.899552530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jun 20 18:51:30.905028 containerd[1726]: time="2025-06-20T18:51:30.904973130Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:30.916311 containerd[1726]: time="2025-06-20T18:51:30.916243638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:30.917150 containerd[1726]: time="2025-06-20T18:51:30.916977251Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 636.089206ms" Jun 20 18:51:30.917150 containerd[1726]: time="2025-06-20T18:51:30.917025852Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 18:51:30.918004 containerd[1726]: time="2025-06-20T18:51:30.917790466Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jun 20 18:51:31.491813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1647312578.mount: Deactivated successfully. Jun 20 18:51:32.960944 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Jun 20 18:51:33.954865 containerd[1726]: time="2025-06-20T18:51:33.954799656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:33.968994 containerd[1726]: time="2025-06-20T18:51:33.968903716Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" Jun 20 18:51:33.975404 containerd[1726]: time="2025-06-20T18:51:33.975339434Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:33.982227 containerd[1726]: time="2025-06-20T18:51:33.982159260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:51:33.983425 containerd[1726]: time="2025-06-20T18:51:33.983255380Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.065430214s" Jun 20 18:51:33.983425 containerd[1726]: time="2025-06-20T18:51:33.983295881Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jun 20 18:51:34.686172 update_engine[1703]: I20250620 18:51:34.686062 1703 update_attempter.cc:509] Updating boot flags... Jun 20 18:51:34.738948 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2695) Jun 20 18:51:34.821551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 20 18:51:34.880143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:51:35.097555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:51:35.101726 (kubelet)[2762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:51:35.648094 kubelet[2762]: E0620 18:51:35.648044 2762 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:51:35.650850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:51:35.651355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:51:35.652343 systemd[1]: kubelet.service: Consumed 163ms CPU time, 112.3M memory peak. Jun 20 18:51:37.994745 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:51:37.994998 systemd[1]: kubelet.service: Consumed 163ms CPU time, 112.3M memory peak. Jun 20 18:51:38.008223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:51:38.038817 systemd[1]: Reload requested from client PID 2779 ('systemctl') (unit session-9.scope)... Jun 20 18:51:38.039043 systemd[1]: Reloading... Jun 20 18:51:38.181966 zram_generator::config[2822]: No configuration found. 
Jun 20 18:51:38.317699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:51:38.437383 systemd[1]: Reloading finished in 397 ms. Jun 20 18:51:38.490805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:51:38.502384 (kubelet)[2887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:51:38.505666 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:51:38.506193 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:51:38.506398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:51:38.506438 systemd[1]: kubelet.service: Consumed 126ms CPU time, 99.4M memory peak. Jun 20 18:51:38.510447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:51:38.826982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:51:38.842273 (kubelet)[2899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:51:38.875260 kubelet[2899]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:51:38.875260 kubelet[2899]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 20 18:51:38.875260 kubelet[2899]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
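Editor's note: the three deprecation warnings above name flags that should move into the file passed via --config. The sketch below merely checks a kubelet command line for those flag names; the flag list is taken verbatim from the warnings, while the example argv value is made up for illustration.

```python
# Flags called out as deprecated in the kubelet warnings above.
DEPRECATED_FLAGS = (
    "--container-runtime-endpoint",
    "--pod-infra-container-image",
    "--volume-plugin-dir",
)

def deprecated_flags_in(argv: list[str]) -> list[str]:
    """Return which of the flags warned about above appear in a kubelet
    command line (argv entries in '--flag=value' or '--flag value' form)."""
    hits = []
    for flag in DEPRECATED_FLAGS:
        if any(arg == flag or arg.startswith(flag + "=") for arg in argv):
            hits.append(flag)
    return hits

# Illustrative command line; the endpoint value is hypothetical.
print(deprecated_flags_in(
    ["kubelet", "--container-runtime-endpoint=unix:///run/containerd/containerd.sock"]
))  # -> ['--container-runtime-endpoint']
```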
Jun 20 18:51:38.875709 kubelet[2899]: I0620 18:51:38.875322 2899 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:51:39.629680 kubelet[2899]: I0620 18:51:39.629302 2899 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 20 18:51:39.629680 kubelet[2899]: I0620 18:51:39.629336 2899 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:51:39.629680 kubelet[2899]: I0620 18:51:39.629671 2899 server.go:934] "Client rotation is on, will bootstrap in background" Jun 20 18:51:39.691236 kubelet[2899]: E0620 18:51:39.691168 2899 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:51:39.719110 kubelet[2899]: I0620 18:51:39.719062 2899 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:51:39.729768 kubelet[2899]: E0620 18:51:39.729718 2899 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 18:51:39.729768 kubelet[2899]: I0620 18:51:39.729755 2899 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 18:51:39.734334 kubelet[2899]: I0620 18:51:39.734311 2899 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:51:39.734453 kubelet[2899]: I0620 18:51:39.734438 2899 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 20 18:51:39.734609 kubelet[2899]: I0620 18:51:39.734582 2899 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:51:39.734842 kubelet[2899]: I0620 18:51:39.734609 2899 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.0-a-e7ad40a4c3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:51:39.735027 kubelet[2899]: I0620 18:51:39.734856 2899 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:51:39.735027 kubelet[2899]: I0620 18:51:39.734869 2899 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 18:51:39.735117 kubelet[2899]: I0620 18:51:39.735026 2899 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:51:39.738068 kubelet[2899]: I0620 18:51:39.737790 2899 kubelet.go:408] "Attempting to sync node with API server" Jun 20 18:51:39.738068 kubelet[2899]: I0620 18:51:39.737821 2899 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:51:39.738068 kubelet[2899]: I0620 18:51:39.737860 2899 kubelet.go:314] "Adding apiserver pod source" Jun 20 18:51:39.738068 kubelet[2899]: I0620 18:51:39.737879 2899 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:51:39.740887 kubelet[2899]: I0620 18:51:39.740864 2899 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 18:51:39.742210 kubelet[2899]: I0620 18:51:39.741394 2899 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 18:51:39.742210 kubelet[2899]: W0620 18:51:39.741455 2899 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
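Editor's note: the container_manager_linux entry above embeds the resolved node configuration as a single JSON object (cgroup driver, hard-eviction thresholds, CPU and topology manager settings). The sketch below lifts that object out of such a line and lists the hard-eviction thresholds; it assumes the nodeConfig={...} fragment is well-formed JSON, as it appears to be here, and the sample line is a trimmed-down stand-in for the full entry.

```python
import json
import re

def eviction_thresholds(journal_line: str):
    """Extract the nodeConfig JSON embedded in kubelet's
    'Creating Container Manager object based on Node Config' entry and
    return its HardEvictionThresholds list."""
    m = re.search(r"nodeConfig=(\{.*\})", journal_line)
    if not m:
        return []
    cfg = json.loads(m.group(1))
    return cfg.get("HardEvictionThresholds", [])

# Trimmed-down example in the same shape as the entry above.
line = ('... "Creating Container Manager object based on Node Config" '
        'nodeConfig={"CgroupDriver":"systemd","HardEvictionThresholds":'
        '[{"Signal":"memory.available","Operator":"LessThan",'
        '"Value":{"Quantity":"100Mi","Percentage":0}}]}')
for t in eviction_thresholds(line):
    print(t["Signal"], t["Operator"], t["Value"])
```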
Jun 20 18:51:39.743875 kubelet[2899]: I0620 18:51:39.743671 2899 server.go:1274] "Started kubelet" Jun 20 18:51:39.743875 kubelet[2899]: W0620 18:51:39.743834 2899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-a-e7ad40a4c3&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 20 18:51:39.744022 kubelet[2899]: E0620 18:51:39.743899 2899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-a-e7ad40a4c3&limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:51:39.748877 kubelet[2899]: W0620 18:51:39.748618 2899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 20 18:51:39.748877 kubelet[2899]: E0620 18:51:39.748677 2899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:51:39.750939 kubelet[2899]: I0620 18:51:39.749656 2899 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:51:39.750939 kubelet[2899]: I0620 18:51:39.750250 2899 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:51:39.750939 kubelet[2899]: I0620 18:51:39.750342 2899 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:51:39.752110 kubelet[2899]: I0620 18:51:39.752092 2899 server.go:449] "Adding debug handlers to kubelet server" Jun 20 18:51:39.754948 kubelet[2899]: I0620 18:51:39.754890 2899 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:51:39.755261 kubelet[2899]: I0620 18:51:39.755241 2899 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:51:39.757844 kubelet[2899]: I0620 18:51:39.757691 2899 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 18:51:39.757977 kubelet[2899]: E0620 18:51:39.757907 2899 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.0-a-e7ad40a4c3\" not found" Jun 20 18:51:39.758114 kubelet[2899]: E0620 18:51:39.756579 2899 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.0-a-e7ad40a4c3.184ad4e862e6a4eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.0-a-e7ad40a4c3,UID:ci-4230.2.0-a-e7ad40a4c3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.0-a-e7ad40a4c3,},FirstTimestamp:2025-06-20 18:51:39.743642859 +0000 UTC 
m=+0.898106872,LastTimestamp:2025-06-20 18:51:39.743642859 +0000 UTC m=+0.898106872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.0-a-e7ad40a4c3,}" Jun 20 18:51:39.760006 kubelet[2899]: E0620 18:51:39.759963 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-e7ad40a4c3?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="200ms" Jun 20 18:51:39.760394 kubelet[2899]: I0620 18:51:39.760368 2899 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:51:39.761146 kubelet[2899]: I0620 18:51:39.761123 2899 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 18:51:39.761284 kubelet[2899]: I0620 18:51:39.761181 2899 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:51:39.762325 kubelet[2899]: E0620 18:51:39.762306 2899 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:51:39.762948 kubelet[2899]: I0620 18:51:39.762855 2899 factory.go:221] Registration of the containerd container factory successfully Jun 20 18:51:39.762948 kubelet[2899]: I0620 18:51:39.762873 2899 factory.go:221] Registration of the systemd container factory successfully Jun 20 18:51:39.766763 kubelet[2899]: W0620 18:51:39.766691 2899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 20 18:51:39.766898 kubelet[2899]: E0620 18:51:39.766878 2899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:51:39.787765 kubelet[2899]: I0620 18:51:39.787711 2899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 18:51:39.789266 kubelet[2899]: I0620 18:51:39.789243 2899 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 20 18:51:39.789546 kubelet[2899]: I0620 18:51:39.789375 2899 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 18:51:39.789546 kubelet[2899]: I0620 18:51:39.789404 2899 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 18:51:39.789546 kubelet[2899]: E0620 18:51:39.789450 2899 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:51:39.794996 kubelet[2899]: I0620 18:51:39.794884 2899 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 18:51:39.794996 kubelet[2899]: I0620 18:51:39.794929 2899 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 18:51:39.794996 kubelet[2899]: I0620 18:51:39.794948 2899 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:51:39.797513 kubelet[2899]: W0620 18:51:39.797477 2899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 20 18:51:39.800713 kubelet[2899]: E0620 18:51:39.797517 2899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:51:39.804113 kubelet[2899]: I0620 18:51:39.804092 2899 policy_none.go:49] "None policy: Start" Jun 20 18:51:39.805807 kubelet[2899]: I0620 18:51:39.804656 2899 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 18:51:39.805807 kubelet[2899]: I0620 18:51:39.804679 2899 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:51:39.816096 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 18:51:39.826777 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 18:51:39.830222 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 18:51:39.835868 kubelet[2899]: I0620 18:51:39.835638 2899 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 18:51:39.835868 kubelet[2899]: I0620 18:51:39.835860 2899 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:51:39.836055 kubelet[2899]: I0620 18:51:39.835876 2899 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:51:39.836248 kubelet[2899]: I0620 18:51:39.836227 2899 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:51:39.838675 kubelet[2899]: E0620 18:51:39.838620 2899 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.0-a-e7ad40a4c3\" not found" Jun 20 18:51:39.900849 systemd[1]: Created slice kubepods-burstable-pode3becabd56468adccccd09fdcb8c449f.slice - libcontainer container kubepods-burstable-pode3becabd56468adccccd09fdcb8c449f.slice. Jun 20 18:51:39.912564 systemd[1]: Created slice kubepods-burstable-podd9e7dc4f7ac8c899ad0b2686b35d3ab2.slice - libcontainer container kubepods-burstable-podd9e7dc4f7ac8c899ad0b2686b35d3ab2.slice. 
Jun 20 18:51:39.922802 systemd[1]: Created slice kubepods-burstable-pod9c5e5b2bd8e1d3581db9e4908a65e1ec.slice - libcontainer container kubepods-burstable-pod9c5e5b2bd8e1d3581db9e4908a65e1ec.slice. Jun 20 18:51:39.938170 kubelet[2899]: I0620 18:51:39.938134 2899 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:39.938616 kubelet[2899]: E0620 18:51:39.938589 2899 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:39.961376 kubelet[2899]: E0620 18:51:39.961295 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-e7ad40a4c3?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="400ms" Jun 20 18:51:39.962595 kubelet[2899]: I0620 18:51:39.962556 2899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9e7dc4f7ac8c899ad0b2686b35d3ab2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"d9e7dc4f7ac8c899ad0b2686b35d3ab2\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:39.962723 kubelet[2899]: I0620 18:51:39.962599 2899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c5e5b2bd8e1d3581db9e4908a65e1ec-ca-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"9c5e5b2bd8e1d3581db9e4908a65e1ec\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:39.962723 kubelet[2899]: I0620 18:51:39.962632 2899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c5e5b2bd8e1d3581db9e4908a65e1ec-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"9c5e5b2bd8e1d3581db9e4908a65e1ec\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:39.962723 kubelet[2899]: I0620 18:51:39.962660 2899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c5e5b2bd8e1d3581db9e4908a65e1ec-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"9c5e5b2bd8e1d3581db9e4908a65e1ec\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:39.962723 kubelet[2899]: I0620 18:51:39.962688 2899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9e7dc4f7ac8c899ad0b2686b35d3ab2-ca-certs\") pod \"kube-apiserver-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"d9e7dc4f7ac8c899ad0b2686b35d3ab2\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:39.962723 kubelet[2899]: I0620 18:51:39.962713 2899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9e7dc4f7ac8c899ad0b2686b35d3ab2-k8s-certs\") pod \"kube-apiserver-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"d9e7dc4f7ac8c899ad0b2686b35d3ab2\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-e7ad40a4c3" Jun 20 
18:51:39.963024 kubelet[2899]: I0620 18:51:39.962742 2899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c5e5b2bd8e1d3581db9e4908a65e1ec-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"9c5e5b2bd8e1d3581db9e4908a65e1ec\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:39.963024 kubelet[2899]: I0620 18:51:39.962773 2899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c5e5b2bd8e1d3581db9e4908a65e1ec-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"9c5e5b2bd8e1d3581db9e4908a65e1ec\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:39.963024 kubelet[2899]: I0620 18:51:39.962803 2899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3becabd56468adccccd09fdcb8c449f-kubeconfig\") pod \"kube-scheduler-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"e3becabd56468adccccd09fdcb8c449f\") " pod="kube-system/kube-scheduler-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:40.140800 kubelet[2899]: I0620 18:51:40.140765 2899 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:40.141191 kubelet[2899]: E0620 18:51:40.141160 2899 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:40.211582 containerd[1726]: time="2025-06-20T18:51:40.211449504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.0-a-e7ad40a4c3,Uid:e3becabd56468adccccd09fdcb8c449f,Namespace:kube-system,Attempt:0,}" Jun 20 18:51:40.222404 containerd[1726]: time="2025-06-20T18:51:40.222361927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.0-a-e7ad40a4c3,Uid:d9e7dc4f7ac8c899ad0b2686b35d3ab2,Namespace:kube-system,Attempt:0,}" Jun 20 18:51:40.226370 containerd[1726]: time="2025-06-20T18:51:40.226041768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3,Uid:9c5e5b2bd8e1d3581db9e4908a65e1ec,Namespace:kube-system,Attempt:0,}" Jun 20 18:51:40.362161 kubelet[2899]: E0620 18:51:40.362111 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-e7ad40a4c3?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="800ms" Jun 20 18:51:40.543753 kubelet[2899]: I0620 18:51:40.543723 2899 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:40.544110 kubelet[2899]: E0620 18:51:40.544078 2899 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:40.642983 kubelet[2899]: W0620 18:51:40.642894 2899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused 
Jun 20 18:51:40.643134 kubelet[2899]: E0620 18:51:40.642994 2899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:51:40.833540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3428157009.mount: Deactivated successfully. Jun 20 18:51:40.891344 containerd[1726]: time="2025-06-20T18:51:40.891277028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:51:40.906452 containerd[1726]: time="2025-06-20T18:51:40.906264196Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jun 20 18:51:40.911197 containerd[1726]: time="2025-06-20T18:51:40.911147851Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:51:40.916168 containerd[1726]: time="2025-06-20T18:51:40.916127706Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:51:40.922442 containerd[1726]: time="2025-06-20T18:51:40.922112573Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 18:51:40.926086 containerd[1726]: time="2025-06-20T18:51:40.926047118Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:51:40.931065 containerd[1726]: time="2025-06-20T18:51:40.930878872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:51:40.931980 containerd[1726]: time="2025-06-20T18:51:40.931950284Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 720.384078ms" Jun 20 18:51:40.934472 containerd[1726]: time="2025-06-20T18:51:40.934386611Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 18:51:40.942056 containerd[1726]: time="2025-06-20T18:51:40.942021497Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 715.883928ms" Jun 20 18:51:41.017054 containerd[1726]: time="2025-06-20T18:51:41.017011438Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 794.531509ms" Jun 20 18:51:41.163549 kubelet[2899]: E0620 18:51:41.163414 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-e7ad40a4c3?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="1.6s" Jun 20 18:51:41.203417 kubelet[2899]: W0620 18:51:41.203312 2899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 20 18:51:41.203417 kubelet[2899]: E0620 18:51:41.203372 2899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:51:41.218573 kubelet[2899]: W0620 18:51:41.217157 2899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-a-e7ad40a4c3&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 20 18:51:41.218573 kubelet[2899]: E0620 18:51:41.217261 2899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-a-e7ad40a4c3&limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:51:41.341082 kubelet[2899]: W0620 18:51:41.341034 2899 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 20 18:51:41.341273 kubelet[2899]: E0620 18:51:41.341091 2899 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:51:41.345986 kubelet[2899]: I0620 18:51:41.345950 2899 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:41.346350 kubelet[2899]: E0620 18:51:41.346313 2899 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:41.613469 containerd[1726]: time="2025-06-20T18:51:41.613271824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:51:41.613469 containerd[1726]: time="2025-06-20T18:51:41.613372225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:51:41.613469 containerd[1726]: time="2025-06-20T18:51:41.613391025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:51:41.615361 containerd[1726]: time="2025-06-20T18:51:41.614908642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:51:41.617965 containerd[1726]: time="2025-06-20T18:51:41.617701274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:51:41.617965 containerd[1726]: time="2025-06-20T18:51:41.617749974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:51:41.617965 containerd[1726]: time="2025-06-20T18:51:41.617772874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:51:41.617965 containerd[1726]: time="2025-06-20T18:51:41.617612473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:51:41.617965 containerd[1726]: time="2025-06-20T18:51:41.617667573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:51:41.617965 containerd[1726]: time="2025-06-20T18:51:41.617689773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:51:41.617965 containerd[1726]: time="2025-06-20T18:51:41.617770774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:51:41.620191 containerd[1726]: time="2025-06-20T18:51:41.619939299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:51:41.658138 systemd[1]: Started cri-containerd-1bde80d273960c7d17380676c93adbe27dca5e68b388dc33a735a8948f8afa2e.scope - libcontainer container 1bde80d273960c7d17380676c93adbe27dca5e68b388dc33a735a8948f8afa2e. Jun 20 18:51:41.664553 systemd[1]: Started cri-containerd-2dd1427bccd21e29c670c2957409ecf159d95a68332bd64164146f102becf471.scope - libcontainer container 2dd1427bccd21e29c670c2957409ecf159d95a68332bd64164146f102becf471. Jun 20 18:51:41.671129 systemd[1]: Started cri-containerd-a83b3629b57c3cdddb8aad20d52af646d572fd77dfd07631ab908446117d316b.scope - libcontainer container a83b3629b57c3cdddb8aad20d52af646d572fd77dfd07631ab908446117d316b. 
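Editor's note: while the API server at 10.200.8.40:6443 keeps refusing connections, the "Failed to ensure lease exists, will retry" entries above back off by doubling: 200 ms, then 400 ms, 800 ms, and 1.6 s. The sketch below reproduces that doubling; the starting value and factor are read off the log, and any upper cap is not observable here.

```python
def lease_retry_intervals(start: float = 0.2, factor: float = 2.0, attempts: int = 4):
    """Reproduce the retry intervals reported by the
    'Failed to ensure lease exists, will retry' entries above:
    0.2s, 0.4s, 0.8s, 1.6s. Start and factor are taken from the log;
    an upper cap, if any, is not visible in this excerpt."""
    interval = start
    for _ in range(attempts):
        yield interval
        interval *= factor

print([f"{t:g}s" for t in lease_retry_intervals()])  # ['0.2s', '0.4s', '0.8s', '1.6s']
```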
Jun 20 18:51:41.705458 kubelet[2899]: E0620 18:51:41.705403 2899 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:51:41.736180 containerd[1726]: time="2025-06-20T18:51:41.735524595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3,Uid:9c5e5b2bd8e1d3581db9e4908a65e1ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bde80d273960c7d17380676c93adbe27dca5e68b388dc33a735a8948f8afa2e\"" Jun 20 18:51:41.746724 containerd[1726]: time="2025-06-20T18:51:41.746621019Z" level=info msg="CreateContainer within sandbox \"1bde80d273960c7d17380676c93adbe27dca5e68b388dc33a735a8948f8afa2e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 18:51:41.757024 containerd[1726]: time="2025-06-20T18:51:41.756937435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.0-a-e7ad40a4c3,Uid:e3becabd56468adccccd09fdcb8c449f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dd1427bccd21e29c670c2957409ecf159d95a68332bd64164146f102becf471\"" Jun 20 18:51:41.760764 containerd[1726]: time="2025-06-20T18:51:41.760637476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.0-a-e7ad40a4c3,Uid:d9e7dc4f7ac8c899ad0b2686b35d3ab2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a83b3629b57c3cdddb8aad20d52af646d572fd77dfd07631ab908446117d316b\"" Jun 20 18:51:41.761250 containerd[1726]: time="2025-06-20T18:51:41.761212283Z" level=info msg="CreateContainer within sandbox \"2dd1427bccd21e29c670c2957409ecf159d95a68332bd64164146f102becf471\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 18:51:41.764478 containerd[1726]: time="2025-06-20T18:51:41.764449819Z" level=info msg="CreateContainer within sandbox \"a83b3629b57c3cdddb8aad20d52af646d572fd77dfd07631ab908446117d316b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 18:51:41.841396 containerd[1726]: time="2025-06-20T18:51:41.841351981Z" level=info msg="CreateContainer within sandbox \"1bde80d273960c7d17380676c93adbe27dca5e68b388dc33a735a8948f8afa2e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3dabb8b93383d4216d52947c2cb360ce1ea4a137fde4c199582f37747df101dc\"" Jun 20 18:51:41.842181 containerd[1726]: time="2025-06-20T18:51:41.842145590Z" level=info msg="StartContainer for \"3dabb8b93383d4216d52947c2cb360ce1ea4a137fde4c199582f37747df101dc\"" Jun 20 18:51:41.856952 containerd[1726]: time="2025-06-20T18:51:41.856814855Z" level=info msg="CreateContainer within sandbox \"2dd1427bccd21e29c670c2957409ecf159d95a68332bd64164146f102becf471\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0349275088ee380331fa522b08fc8ec8b6dc42f7e5dbaca7c6a27b54b90e93b7\"" Jun 20 18:51:41.859969 containerd[1726]: time="2025-06-20T18:51:41.857674264Z" level=info msg="StartContainer for \"0349275088ee380331fa522b08fc8ec8b6dc42f7e5dbaca7c6a27b54b90e93b7\"" Jun 20 18:51:41.865768 containerd[1726]: time="2025-06-20T18:51:41.862554719Z" level=info msg="CreateContainer within sandbox \"a83b3629b57c3cdddb8aad20d52af646d572fd77dfd07631ab908446117d316b\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cfbc1f129fdb51534c989474a2ecd338999c615066e65dfa0cd4e13174511b2b\"" Jun 20 18:51:41.865768 containerd[1726]: time="2025-06-20T18:51:41.864609842Z" level=info msg="StartContainer for \"cfbc1f129fdb51534c989474a2ecd338999c615066e65dfa0cd4e13174511b2b\"" Jun 20 18:51:41.893109 systemd[1]: Started cri-containerd-3dabb8b93383d4216d52947c2cb360ce1ea4a137fde4c199582f37747df101dc.scope - libcontainer container 3dabb8b93383d4216d52947c2cb360ce1ea4a137fde4c199582f37747df101dc. Jun 20 18:51:41.938117 systemd[1]: Started cri-containerd-cfbc1f129fdb51534c989474a2ecd338999c615066e65dfa0cd4e13174511b2b.scope - libcontainer container cfbc1f129fdb51534c989474a2ecd338999c615066e65dfa0cd4e13174511b2b. Jun 20 18:51:41.953397 systemd[1]: Started cri-containerd-0349275088ee380331fa522b08fc8ec8b6dc42f7e5dbaca7c6a27b54b90e93b7.scope - libcontainer container 0349275088ee380331fa522b08fc8ec8b6dc42f7e5dbaca7c6a27b54b90e93b7. Jun 20 18:51:43.529307 kubelet[2899]: I0620 18:51:43.529270 2899 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:44.032838 containerd[1726]: time="2025-06-20T18:51:44.032424851Z" level=info msg="StartContainer for \"0349275088ee380331fa522b08fc8ec8b6dc42f7e5dbaca7c6a27b54b90e93b7\" returns successfully" Jun 20 18:51:44.032838 containerd[1726]: time="2025-06-20T18:51:44.032684454Z" level=info msg="StartContainer for \"3dabb8b93383d4216d52947c2cb360ce1ea4a137fde4c199582f37747df101dc\" returns successfully" Jun 20 18:51:44.032838 containerd[1726]: time="2025-06-20T18:51:44.032767155Z" level=info msg="StartContainer for \"cfbc1f129fdb51534c989474a2ecd338999c615066e65dfa0cd4e13174511b2b\" returns successfully" Jun 20 18:51:44.828077 kubelet[2899]: I0620 18:51:44.827682 2899 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:44.828077 kubelet[2899]: E0620 18:51:44.827723 2899 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230.2.0-a-e7ad40a4c3\": node \"ci-4230.2.0-a-e7ad40a4c3\" not found" Jun 20 18:51:45.052758 kubelet[2899]: E0620 18:51:45.052517 2899 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:45.052758 kubelet[2899]: E0620 18:51:45.052543 2899 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230.2.0-a-e7ad40a4c3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:45.052758 kubelet[2899]: E0620 18:51:45.052517 2899 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.2.0-a-e7ad40a4c3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:45.824078 kubelet[2899]: I0620 18:51:45.823851 2899 apiserver.go:52] "Watching apiserver" Jun 20 18:51:45.861800 kubelet[2899]: I0620 18:51:45.861235 2899 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 20 18:51:46.057686 kubelet[2899]: W0620 18:51:46.057115 2899 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] 
Jun 20 18:51:46.618907 systemd[1]: Reload requested from client PID 3176 ('systemctl') (unit session-9.scope)... Jun 20 18:51:46.618941 systemd[1]: Reloading... Jun 20 18:51:46.736952 zram_generator::config[3232]: No configuration found. Jun 20 18:51:46.864831 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:51:47.010771 systemd[1]: Reloading finished in 391 ms. Jun 20 18:51:47.041184 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:51:47.059605 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:51:47.059946 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:51:47.060017 systemd[1]: kubelet.service: Consumed 656ms CPU time, 132.7M memory peak. Jun 20 18:51:47.067554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:51:47.237132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:51:47.247280 (kubelet)[3290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:51:47.902536 kubelet[3290]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:51:47.902536 kubelet[3290]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 20 18:51:47.902536 kubelet[3290]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:51:47.902536 kubelet[3290]: I0620 18:51:47.902132 3290 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:51:47.909119 kubelet[3290]: I0620 18:51:47.909086 3290 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 20 18:51:47.909119 kubelet[3290]: I0620 18:51:47.909114 3290 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:51:47.910048 kubelet[3290]: I0620 18:51:47.909449 3290 server.go:934] "Client rotation is on, will bootstrap in background" Jun 20 18:51:47.911092 kubelet[3290]: I0620 18:51:47.911068 3290 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 20 18:51:47.913524 kubelet[3290]: I0620 18:51:47.913494 3290 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:51:47.917882 kubelet[3290]: E0620 18:51:47.917847 3290 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 18:51:47.918086 kubelet[3290]: I0620 18:51:47.917886 3290 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jun 20 18:51:47.922189 kubelet[3290]: I0620 18:51:47.922057 3290 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 18:51:47.926058 kubelet[3290]: I0620 18:51:47.922239 3290 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 20 18:51:47.926058 kubelet[3290]: I0620 18:51:47.922411 3290 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:51:47.926058 kubelet[3290]: I0620 18:51:47.922452 3290 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.0-a-e7ad40a4c3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:51:47.926058 kubelet[3290]: I0620 18:51:47.922764 3290 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:51:47.926311 kubelet[3290]: I0620 18:51:47.922775 3290 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 18:51:47.926311 kubelet[3290]: I0620 18:51:47.922808 3290 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:51:47.926311 kubelet[3290]: I0620 18:51:47.922944 3290 kubelet.go:408] "Attempting to sync node with API server" Jun 20 18:51:47.926311 kubelet[3290]: I0620 18:51:47.922959 3290 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:51:47.926311 kubelet[3290]: I0620 18:51:47.922994 3290 kubelet.go:314] "Adding apiserver pod source" Jun 20 18:51:47.926311 kubelet[3290]: I0620 18:51:47.923006 3290 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:51:47.929343 kubelet[3290]: I0620 18:51:47.929324 3290 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 18:51:47.930197 kubelet[3290]: I0620 18:51:47.930180 3290 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 18:51:47.931678 kubelet[3290]: I0620 18:51:47.930954 3290 server.go:1274] "Started kubelet" Jun 20 
18:51:47.938243 kubelet[3290]: I0620 18:51:47.938219 3290 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:51:47.938462 kubelet[3290]: I0620 18:51:47.938443 3290 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:51:47.940253 kubelet[3290]: I0620 18:51:47.939570 3290 server.go:449] "Adding debug handlers to kubelet server" Jun 20 18:51:47.943012 kubelet[3290]: I0620 18:51:47.941758 3290 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:51:47.943605 kubelet[3290]: I0620 18:51:47.943409 3290 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:51:47.946265 kubelet[3290]: I0620 18:51:47.945065 3290 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 18:51:47.946265 kubelet[3290]: E0620 18:51:47.945716 3290 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.0-a-e7ad40a4c3\" not found" Jun 20 18:51:47.949933 kubelet[3290]: I0620 18:51:47.949086 3290 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 18:51:47.949933 kubelet[3290]: I0620 18:51:47.949215 3290 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:51:47.953468 kubelet[3290]: I0620 18:51:47.953294 3290 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 18:51:47.954828 kubelet[3290]: I0620 18:51:47.954780 3290 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 18:51:47.954828 kubelet[3290]: I0620 18:51:47.954815 3290 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 18:51:47.954975 kubelet[3290]: I0620 18:51:47.954834 3290 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 18:51:47.954975 kubelet[3290]: E0620 18:51:47.954879 3290 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:51:47.963871 kubelet[3290]: I0620 18:51:47.961715 3290 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:51:47.977945 kubelet[3290]: I0620 18:51:47.977327 3290 factory.go:221] Registration of the systemd container factory successfully Jun 20 18:51:47.978228 kubelet[3290]: I0620 18:51:47.978126 3290 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:51:47.987045 kubelet[3290]: I0620 18:51:47.986971 3290 factory.go:221] Registration of the containerd container factory successfully Jun 20 18:51:47.989257 kubelet[3290]: E0620 18:51:47.989227 3290 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:51:48.040858 kubelet[3290]: I0620 18:51:48.040346 3290 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 18:51:48.040858 kubelet[3290]: I0620 18:51:48.040370 3290 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 18:51:48.040858 kubelet[3290]: I0620 18:51:48.040390 3290 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:51:48.040858 kubelet[3290]: I0620 18:51:48.040562 3290 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 18:51:48.040858 kubelet[3290]: I0620 18:51:48.040577 3290 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 18:51:48.040858 kubelet[3290]: I0620 18:51:48.040602 3290 policy_none.go:49] "None policy: Start" Jun 20 18:51:48.041718 kubelet[3290]: I0620 18:51:48.041693 3290 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 18:51:48.041718 kubelet[3290]: I0620 18:51:48.041720 3290 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:51:48.041947 kubelet[3290]: I0620 18:51:48.041908 3290 state_mem.go:75] "Updated machine memory state" Jun 20 18:51:48.047413 kubelet[3290]: I0620 18:51:48.047374 3290 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 18:51:48.048032 kubelet[3290]: I0620 18:51:48.047890 3290 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:51:48.048032 kubelet[3290]: I0620 18:51:48.047909 3290 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:51:48.049159 kubelet[3290]: I0620 18:51:48.048262 3290 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:51:48.069867 kubelet[3290]: W0620 18:51:48.069786 3290 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:51:48.070388 sudo[3322]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 18:51:48.072002 sudo[3322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 18:51:48.074693 kubelet[3290]: W0620 18:51:48.074585 3290 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:51:48.077466 kubelet[3290]: W0620 18:51:48.077210 3290 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:51:48.077466 kubelet[3290]: E0620 18:51:48.077277 3290 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.2.0-a-e7ad40a4c3\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:48.149743 kubelet[3290]: I0620 18:51:48.149704 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9e7dc4f7ac8c899ad0b2686b35d3ab2-k8s-certs\") pod \"kube-apiserver-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"d9e7dc4f7ac8c899ad0b2686b35d3ab2\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:48.149911 kubelet[3290]: I0620 18:51:48.149751 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/d9e7dc4f7ac8c899ad0b2686b35d3ab2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"d9e7dc4f7ac8c899ad0b2686b35d3ab2\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:48.149911 kubelet[3290]: I0620 18:51:48.149788 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c5e5b2bd8e1d3581db9e4908a65e1ec-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"9c5e5b2bd8e1d3581db9e4908a65e1ec\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:48.149911 kubelet[3290]: I0620 18:51:48.149823 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c5e5b2bd8e1d3581db9e4908a65e1ec-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"9c5e5b2bd8e1d3581db9e4908a65e1ec\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:48.149911 kubelet[3290]: I0620 18:51:48.149852 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3becabd56468adccccd09fdcb8c449f-kubeconfig\") pod \"kube-scheduler-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"e3becabd56468adccccd09fdcb8c449f\") " pod="kube-system/kube-scheduler-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:48.149911 kubelet[3290]: I0620 18:51:48.149875 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9e7dc4f7ac8c899ad0b2686b35d3ab2-ca-certs\") pod \"kube-apiserver-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"d9e7dc4f7ac8c899ad0b2686b35d3ab2\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:48.150136 kubelet[3290]: I0620 18:51:48.149893 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c5e5b2bd8e1d3581db9e4908a65e1ec-ca-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"9c5e5b2bd8e1d3581db9e4908a65e1ec\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:48.150136 kubelet[3290]: I0620 18:51:48.149912 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c5e5b2bd8e1d3581db9e4908a65e1ec-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"9c5e5b2bd8e1d3581db9e4908a65e1ec\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:48.150221 kubelet[3290]: I0620 18:51:48.150147 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c5e5b2bd8e1d3581db9e4908a65e1ec-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3\" (UID: \"9c5e5b2bd8e1d3581db9e4908a65e1ec\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:48.159988 kubelet[3290]: I0620 18:51:48.159779 3290 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:48.176013 kubelet[3290]: I0620 18:51:48.175591 3290 kubelet_node_status.go:111] "Node was previously registered" 
node="ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:48.176013 kubelet[3290]: I0620 18:51:48.175672 3290 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:48.615566 sudo[3322]: pam_unix(sudo:session): session closed for user root Jun 20 18:51:48.935782 kubelet[3290]: I0620 18:51:48.935665 3290 apiserver.go:52] "Watching apiserver" Jun 20 18:51:48.949853 kubelet[3290]: I0620 18:51:48.949455 3290 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 20 18:51:49.022156 kubelet[3290]: W0620 18:51:49.022116 3290 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:51:49.022327 kubelet[3290]: E0620 18:51:49.022210 3290 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.2.0-a-e7ad40a4c3\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.0-a-e7ad40a4c3" Jun 20 18:51:49.051668 kubelet[3290]: I0620 18:51:49.051568 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-e7ad40a4c3" podStartSLOduration=1.051529913 podStartE2EDuration="1.051529913s" podCreationTimestamp="2025-06-20 18:51:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:51:49.039707187 +0000 UTC m=+1.788666332" watchObservedRunningTime="2025-06-20 18:51:49.051529913 +0000 UTC m=+1.800488958" Jun 20 18:51:49.066374 kubelet[3290]: I0620 18:51:49.066302 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.0-a-e7ad40a4c3" podStartSLOduration=1.06627697 podStartE2EDuration="1.06627697s" podCreationTimestamp="2025-06-20 18:51:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:51:49.051972018 +0000 UTC m=+1.800931063" watchObservedRunningTime="2025-06-20 18:51:49.06627697 +0000 UTC m=+1.815236115" Jun 20 18:51:49.083017 kubelet[3290]: I0620 18:51:49.082956 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.0-a-e7ad40a4c3" podStartSLOduration=3.082932047 podStartE2EDuration="3.082932047s" podCreationTimestamp="2025-06-20 18:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:51:49.066548373 +0000 UTC m=+1.815507418" watchObservedRunningTime="2025-06-20 18:51:49.082932047 +0000 UTC m=+1.831891192" Jun 20 18:51:49.936957 sudo[2260]: pam_unix(sudo:session): session closed for user root Jun 20 18:51:50.036459 sshd[2259]: Connection closed by 10.200.16.10 port 37668 Jun 20 18:51:50.037322 sshd-session[2257]: pam_unix(sshd:session): session closed for user core Jun 20 18:51:50.042305 systemd[1]: sshd@6-10.200.8.40:22-10.200.16.10:37668.service: Deactivated successfully. Jun 20 18:51:50.046405 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 18:51:50.046663 systemd[1]: session-9.scope: Consumed 4.607s CPU time, 260.9M memory peak. Jun 20 18:51:50.048520 systemd-logind[1699]: Session 9 logged out. Waiting for processes to exit. Jun 20 18:51:50.049589 systemd-logind[1699]: Removed session 9. 
Jun 20 18:51:51.094157 kubelet[3290]: I0620 18:51:51.094116 3290 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 18:51:51.095608 containerd[1726]: time="2025-06-20T18:51:51.095337359Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 18:51:51.096154 kubelet[3290]: I0620 18:51:51.095714 3290 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 18:51:52.047353 systemd[1]: Created slice kubepods-besteffort-pod7545b62e_2d67_4cfb_9061_9d4123359272.slice - libcontainer container kubepods-besteffort-pod7545b62e_2d67_4cfb_9061_9d4123359272.slice. Jun 20 18:51:52.061692 systemd[1]: Created slice kubepods-burstable-pod31dd5f32_f5dc_4042_97d6_b0f7837b8c76.slice - libcontainer container kubepods-burstable-pod31dd5f32_f5dc_4042_97d6_b0f7837b8c76.slice. Jun 20 18:51:52.074181 kubelet[3290]: I0620 18:51:52.073619 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cilium-config-path\") pod \"cilium-9pjgb\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " pod="kube-system/cilium-9pjgb" Jun 20 18:51:52.074560 kubelet[3290]: I0620 18:51:52.074540 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-host-proc-sys-kernel\") pod \"cilium-9pjgb\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " pod="kube-system/cilium-9pjgb" Jun 20 18:51:52.078425 kubelet[3290]: I0620 18:51:52.078396 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7545b62e-2d67-4cfb-9061-9d4123359272-xtables-lock\") pod \"kube-proxy-tm7jb\" (UID: \"7545b62e-2d67-4cfb-9061-9d4123359272\") " pod="kube-system/kube-proxy-tm7jb" Jun 20 18:51:52.078948 kubelet[3290]: I0620 18:51:52.078613 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-hostproc\") pod \"cilium-9pjgb\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " pod="kube-system/cilium-9pjgb" Jun 20 18:51:52.078948 kubelet[3290]: I0620 18:51:52.078842 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-host-proc-sys-net\") pod \"cilium-9pjgb\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " pod="kube-system/cilium-9pjgb" Jun 20 18:51:52.078948 kubelet[3290]: I0620 18:51:52.078886 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-clustermesh-secrets\") pod \"cilium-9pjgb\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " pod="kube-system/cilium-9pjgb" Jun 20 18:51:52.079537 kubelet[3290]: I0620 18:51:52.078911 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cilium-run\") pod \"cilium-9pjgb\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " 
pod="kube-system/cilium-9pjgb" Jun 20 18:51:52.079537 kubelet[3290]: I0620 18:51:52.079268 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7545b62e-2d67-4cfb-9061-9d4123359272-lib-modules\") pod \"kube-proxy-tm7jb\" (UID: \"7545b62e-2d67-4cfb-9061-9d4123359272\") " pod="kube-system/kube-proxy-tm7jb" Jun 20 18:51:52.079537 kubelet[3290]: I0620 18:51:52.079464 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-etc-cni-netd\") pod \"cilium-9pjgb\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " pod="kube-system/cilium-9pjgb" Jun 20 18:51:52.079537 kubelet[3290]: I0620 18:51:52.079501 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-xtables-lock\") pod \"cilium-9pjgb\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " pod="kube-system/cilium-9pjgb" Jun 20 18:51:52.080321 kubelet[3290]: I0620 18:51:52.079888 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-hubble-tls\") pod \"cilium-9pjgb\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " pod="kube-system/cilium-9pjgb" Jun 20 18:51:52.080321 kubelet[3290]: I0620 18:51:52.080033 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfq5w\" (UniqueName: \"kubernetes.io/projected/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-kube-api-access-rfq5w\") pod \"cilium-9pjgb\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " pod="kube-system/cilium-9pjgb" Jun 20 18:51:52.080638 kubelet[3290]: I0620 18:51:52.080071 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcqsm\" (UniqueName: \"kubernetes.io/projected/7545b62e-2d67-4cfb-9061-9d4123359272-kube-api-access-zcqsm\") pod \"kube-proxy-tm7jb\" (UID: \"7545b62e-2d67-4cfb-9061-9d4123359272\") " pod="kube-system/kube-proxy-tm7jb" Jun 20 18:51:52.080638 kubelet[3290]: I0620 18:51:52.080578 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cilium-cgroup\") pod \"cilium-9pjgb\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " pod="kube-system/cilium-9pjgb" Jun 20 18:51:52.080638 kubelet[3290]: I0620 18:51:52.080608 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7545b62e-2d67-4cfb-9061-9d4123359272-kube-proxy\") pod \"kube-proxy-tm7jb\" (UID: \"7545b62e-2d67-4cfb-9061-9d4123359272\") " pod="kube-system/kube-proxy-tm7jb" Jun 20 18:51:52.080913 kubelet[3290]: I0620 18:51:52.080783 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-bpf-maps\") pod \"cilium-9pjgb\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " pod="kube-system/cilium-9pjgb" Jun 20 18:51:52.080913 kubelet[3290]: I0620 18:51:52.080820 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-lib-modules\") pod \"cilium-9pjgb\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " pod="kube-system/cilium-9pjgb" Jun 20 18:51:52.080913 kubelet[3290]: I0620 18:51:52.080838 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cni-path\") pod \"cilium-9pjgb\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " pod="kube-system/cilium-9pjgb" Jun 20 18:51:52.148655 systemd[1]: Created slice kubepods-besteffort-pod9c8a9c4e_bd33_4599_9178_85279d02aade.slice - libcontainer container kubepods-besteffort-pod9c8a9c4e_bd33_4599_9178_85279d02aade.slice. Jun 20 18:51:52.181874 kubelet[3290]: I0620 18:51:52.181829 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c8a9c4e-bd33-4599-9178-85279d02aade-cilium-config-path\") pod \"cilium-operator-5d85765b45-pg798\" (UID: \"9c8a9c4e-bd33-4599-9178-85279d02aade\") " pod="kube-system/cilium-operator-5d85765b45-pg798" Jun 20 18:51:52.182347 kubelet[3290]: I0620 18:51:52.182012 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t6l9\" (UniqueName: \"kubernetes.io/projected/9c8a9c4e-bd33-4599-9178-85279d02aade-kube-api-access-4t6l9\") pod \"cilium-operator-5d85765b45-pg798\" (UID: \"9c8a9c4e-bd33-4599-9178-85279d02aade\") " pod="kube-system/cilium-operator-5d85765b45-pg798" Jun 20 18:51:52.360429 containerd[1726]: time="2025-06-20T18:51:52.360293818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tm7jb,Uid:7545b62e-2d67-4cfb-9061-9d4123359272,Namespace:kube-system,Attempt:0,}" Jun 20 18:51:52.368043 containerd[1726]: time="2025-06-20T18:51:52.368005000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9pjgb,Uid:31dd5f32-f5dc-4042-97d6-b0f7837b8c76,Namespace:kube-system,Attempt:0,}" Jun 20 18:51:52.455146 containerd[1726]: time="2025-06-20T18:51:52.455085627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pg798,Uid:9c8a9c4e-bd33-4599-9178-85279d02aade,Namespace:kube-system,Attempt:0,}" Jun 20 18:51:52.779717 containerd[1726]: time="2025-06-20T18:51:52.779460678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:51:52.779717 containerd[1726]: time="2025-06-20T18:51:52.779521479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:51:52.779717 containerd[1726]: time="2025-06-20T18:51:52.779542679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:51:52.779717 containerd[1726]: time="2025-06-20T18:51:52.779646680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:51:52.809133 systemd[1]: Started cri-containerd-c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761.scope - libcontainer container c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761. 
Jun 20 18:51:52.818406 containerd[1726]: time="2025-06-20T18:51:52.816214269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:51:52.818406 containerd[1726]: time="2025-06-20T18:51:52.816559773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:51:52.818406 containerd[1726]: time="2025-06-20T18:51:52.816762375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:51:52.818406 containerd[1726]: time="2025-06-20T18:51:52.818128390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:51:52.820028 containerd[1726]: time="2025-06-20T18:51:52.819885008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:51:52.820286 containerd[1726]: time="2025-06-20T18:51:52.820233612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:51:52.820380 containerd[1726]: time="2025-06-20T18:51:52.820307313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:51:52.821103 containerd[1726]: time="2025-06-20T18:51:52.821046921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:51:52.854122 systemd[1]: Started cri-containerd-2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3.scope - libcontainer container 2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3. Jun 20 18:51:52.857021 systemd[1]: Started cri-containerd-c272440386156e8011c3701e6cdaf9275981ac65b167b4743b1cf07f34ec7f84.scope - libcontainer container c272440386156e8011c3701e6cdaf9275981ac65b167b4743b1cf07f34ec7f84. 
Jun 20 18:51:52.892058 containerd[1726]: time="2025-06-20T18:51:52.892004876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9pjgb,Uid:31dd5f32-f5dc-4042-97d6-b0f7837b8c76,Namespace:kube-system,Attempt:0,} returns sandbox id \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\"" Jun 20 18:51:52.897290 containerd[1726]: time="2025-06-20T18:51:52.897024429Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 18:51:52.909809 containerd[1726]: time="2025-06-20T18:51:52.909777065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tm7jb,Uid:7545b62e-2d67-4cfb-9061-9d4123359272,Namespace:kube-system,Attempt:0,} returns sandbox id \"c272440386156e8011c3701e6cdaf9275981ac65b167b4743b1cf07f34ec7f84\"" Jun 20 18:51:52.913355 containerd[1726]: time="2025-06-20T18:51:52.913173201Z" level=info msg="CreateContainer within sandbox \"c272440386156e8011c3701e6cdaf9275981ac65b167b4743b1cf07f34ec7f84\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 18:51:52.932004 containerd[1726]: time="2025-06-20T18:51:52.931966301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pg798,Uid:9c8a9c4e-bd33-4599-9178-85279d02aade,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3\"" Jun 20 18:51:52.963556 containerd[1726]: time="2025-06-20T18:51:52.963504136Z" level=info msg="CreateContainer within sandbox \"c272440386156e8011c3701e6cdaf9275981ac65b167b4743b1cf07f34ec7f84\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"857328f91f54251b560e24d47e634c555723510e36d6ea61f7bf45621adf4564\"" Jun 20 18:51:52.964225 containerd[1726]: time="2025-06-20T18:51:52.964179744Z" level=info msg="StartContainer for \"857328f91f54251b560e24d47e634c555723510e36d6ea61f7bf45621adf4564\"" Jun 20 18:51:52.995078 systemd[1]: Started cri-containerd-857328f91f54251b560e24d47e634c555723510e36d6ea61f7bf45621adf4564.scope - libcontainer container 857328f91f54251b560e24d47e634c555723510e36d6ea61f7bf45621adf4564. Jun 20 18:51:53.031244 containerd[1726]: time="2025-06-20T18:51:53.030417648Z" level=info msg="StartContainer for \"857328f91f54251b560e24d47e634c555723510e36d6ea61f7bf45621adf4564\" returns successfully" Jun 20 18:51:54.041474 kubelet[3290]: I0620 18:51:54.041000 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tm7jb" podStartSLOduration=2.040979597 podStartE2EDuration="2.040979597s" podCreationTimestamp="2025-06-20 18:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:51:54.040947196 +0000 UTC m=+6.789906342" watchObservedRunningTime="2025-06-20 18:51:54.040979597 +0000 UTC m=+6.789938742" Jun 20 18:51:59.253401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2687434281.mount: Deactivated successfully. 
Jun 20 18:52:01.475155 containerd[1726]: time="2025-06-20T18:52:01.475094883Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:52:01.477398 containerd[1726]: time="2025-06-20T18:52:01.477334625Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 20 18:52:01.481899 containerd[1726]: time="2025-06-20T18:52:01.481837010Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:52:01.483869 containerd[1726]: time="2025-06-20T18:52:01.483334838Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.586270508s" Jun 20 18:52:01.483869 containerd[1726]: time="2025-06-20T18:52:01.483378338Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 20 18:52:01.485177 containerd[1726]: time="2025-06-20T18:52:01.484975168Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 18:52:01.486486 containerd[1726]: time="2025-06-20T18:52:01.486450696Z" level=info msg="CreateContainer within sandbox \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:52:01.537353 containerd[1726]: time="2025-06-20T18:52:01.537304349Z" level=info msg="CreateContainer within sandbox \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108\"" Jun 20 18:52:01.538235 containerd[1726]: time="2025-06-20T18:52:01.538027363Z" level=info msg="StartContainer for \"85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108\"" Jun 20 18:52:01.572074 systemd[1]: Started cri-containerd-85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108.scope - libcontainer container 85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108. Jun 20 18:52:01.601032 containerd[1726]: time="2025-06-20T18:52:01.600867641Z" level=info msg="StartContainer for \"85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108\" returns successfully" Jun 20 18:52:01.610813 systemd[1]: cri-containerd-85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108.scope: Deactivated successfully. Jun 20 18:52:02.518433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108-rootfs.mount: Deactivated successfully. 
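[Editor's note] The cilium image pull is bracketed by the PullImage request at 18:51:52.897 and the "Pulled image ... in 8.586270508s" message at 18:52:01.483; the gap between the two log timestamps comes out essentially the same (containerd times the pull internally, so the log-line delta differs by a few tens of microseconds). A quick check with the standard library, truncating the nanosecond timestamps to microseconds:

    from datetime import datetime

    pull_requested = datetime.fromisoformat("2025-06-20T18:51:52.897024429"[:26])
    image_pulled   = datetime.fromisoformat("2025-06-20T18:52:01.483334838"[:26])

    delta = (image_pulled - pull_requested).total_seconds()
    print(f"{delta:.6f}s")  # ~8.586310s, versus containerd's internally measured 8.586270508s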
Jun 20 18:52:05.526872 containerd[1726]: time="2025-06-20T18:52:05.526786332Z" level=info msg="shim disconnected" id=85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108 namespace=k8s.io Jun 20 18:52:05.526872 containerd[1726]: time="2025-06-20T18:52:05.526861034Z" level=warning msg="cleaning up after shim disconnected" id=85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108 namespace=k8s.io Jun 20 18:52:05.526872 containerd[1726]: time="2025-06-20T18:52:05.526873834Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:52:06.057759 containerd[1726]: time="2025-06-20T18:52:06.057705984Z" level=info msg="CreateContainer within sandbox \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:52:06.113989 containerd[1726]: time="2025-06-20T18:52:06.113945339Z" level=info msg="CreateContainer within sandbox \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da\"" Jun 20 18:52:06.114688 containerd[1726]: time="2025-06-20T18:52:06.114520849Z" level=info msg="StartContainer for \"f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da\"" Jun 20 18:52:06.149069 systemd[1]: Started cri-containerd-f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da.scope - libcontainer container f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da. Jun 20 18:52:06.177130 containerd[1726]: time="2025-06-20T18:52:06.177081422Z" level=info msg="StartContainer for \"f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da\" returns successfully" Jun 20 18:52:06.188826 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:52:06.189432 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:52:06.189881 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:52:06.195200 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:52:06.198396 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 18:52:06.199969 systemd[1]: cri-containerd-f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da.scope: Deactivated successfully. Jun 20 18:52:06.225833 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:52:06.239584 containerd[1726]: time="2025-06-20T18:52:06.239502392Z" level=info msg="shim disconnected" id=f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da namespace=k8s.io Jun 20 18:52:06.239584 containerd[1726]: time="2025-06-20T18:52:06.239579094Z" level=warning msg="cleaning up after shim disconnected" id=f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da namespace=k8s.io Jun 20 18:52:06.239900 containerd[1726]: time="2025-06-20T18:52:06.239592294Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:52:07.061126 containerd[1726]: time="2025-06-20T18:52:07.060937844Z" level=info msg="CreateContainer within sandbox \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 18:52:07.097438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da-rootfs.mount: Deactivated successfully. 
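[Editor's note] The cilium init steps logged so far (mount-cgroup, then apply-sysctl-overwrites) each follow the same CRI pattern: CreateContainer within the c759c... sandbox, StartContainer returns successfully, the container's systemd scope is deactivated when the process exits, the shim disconnects, and the rootfs mount is cleaned up. A rough sketch (hypothetical helper) that pairs start and exit events by container ID to recover that per-container timeline:

    import re
    from collections import defaultdict

    START = re.compile(r'StartContainer for \\?"(?P<id>[0-9a-f]{16,64})\\?" returns successfully')
    EXIT  = re.compile(r'cri-containerd-(?P<id>[0-9a-f]{16,64})\.scope: Deactivated successfully')

    def lifecycle(journal_lines):
        """Map container id -> ordered list of observed lifecycle events."""
        events = defaultdict(list)
        for line in journal_lines:
            if (m := START.search(line)):
                events[m.group("id")].append("started")
            elif (m := EXIT.search(line)):
                events[m.group("id")].append("exited")
        return dict(events)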
Jun 20 18:52:07.133604 containerd[1726]: time="2025-06-20T18:52:07.133556786Z" level=info msg="CreateContainer within sandbox \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c\"" Jun 20 18:52:07.134345 containerd[1726]: time="2025-06-20T18:52:07.134138192Z" level=info msg="StartContainer for \"dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c\"" Jun 20 18:52:07.174084 systemd[1]: Started cri-containerd-dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c.scope - libcontainer container dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c. Jun 20 18:52:07.229540 containerd[1726]: time="2025-06-20T18:52:07.229395366Z" level=info msg="StartContainer for \"dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c\" returns successfully" Jun 20 18:52:07.234630 systemd[1]: cri-containerd-dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c.scope: Deactivated successfully. Jun 20 18:52:07.338669 containerd[1726]: time="2025-06-20T18:52:07.337345470Z" level=info msg="shim disconnected" id=dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c namespace=k8s.io Jun 20 18:52:07.338669 containerd[1726]: time="2025-06-20T18:52:07.337403271Z" level=warning msg="cleaning up after shim disconnected" id=dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c namespace=k8s.io Jun 20 18:52:07.338669 containerd[1726]: time="2025-06-20T18:52:07.337414271Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:52:07.932708 containerd[1726]: time="2025-06-20T18:52:07.932659857Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:52:07.934842 containerd[1726]: time="2025-06-20T18:52:07.934794478Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 20 18:52:07.939259 containerd[1726]: time="2025-06-20T18:52:07.939205223Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:52:07.941110 containerd[1726]: time="2025-06-20T18:52:07.940483837Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.455470967s" Jun 20 18:52:07.941110 containerd[1726]: time="2025-06-20T18:52:07.940522437Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 20 18:52:07.942851 containerd[1726]: time="2025-06-20T18:52:07.942768960Z" level=info msg="CreateContainer within sandbox \"2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 18:52:07.978908 
containerd[1726]: time="2025-06-20T18:52:07.978857929Z" level=info msg="CreateContainer within sandbox \"2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68\"" Jun 20 18:52:07.979436 containerd[1726]: time="2025-06-20T18:52:07.979374234Z" level=info msg="StartContainer for \"208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68\"" Jun 20 18:52:08.006079 systemd[1]: Started cri-containerd-208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68.scope - libcontainer container 208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68. Jun 20 18:52:08.034978 containerd[1726]: time="2025-06-20T18:52:08.034883002Z" level=info msg="StartContainer for \"208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68\" returns successfully" Jun 20 18:52:08.076654 containerd[1726]: time="2025-06-20T18:52:08.076427026Z" level=info msg="CreateContainer within sandbox \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 18:52:08.109787 systemd[1]: run-containerd-runc-k8s.io-dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c-runc.Kuk9oR.mount: Deactivated successfully. Jun 20 18:52:08.109937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c-rootfs.mount: Deactivated successfully. Jun 20 18:52:08.143451 containerd[1726]: time="2025-06-20T18:52:08.143409911Z" level=info msg="CreateContainer within sandbox \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d\"" Jun 20 18:52:08.146190 containerd[1726]: time="2025-06-20T18:52:08.145181729Z" level=info msg="StartContainer for \"a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d\"" Jun 20 18:52:08.156582 kubelet[3290]: I0620 18:52:08.156507 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-pg798" podStartSLOduration=1.148364014 podStartE2EDuration="16.156481845s" podCreationTimestamp="2025-06-20 18:51:52 +0000 UTC" firstStartedPulling="2025-06-20 18:51:52.933125013 +0000 UTC m=+5.682084058" lastFinishedPulling="2025-06-20 18:52:07.941242844 +0000 UTC m=+20.690201889" observedRunningTime="2025-06-20 18:52:08.100106969 +0000 UTC m=+20.849066114" watchObservedRunningTime="2025-06-20 18:52:08.156481845 +0000 UTC m=+20.905440890" Jun 20 18:52:08.210236 systemd[1]: Started cri-containerd-a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d.scope - libcontainer container a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d. Jun 20 18:52:08.258111 systemd[1]: cri-containerd-a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d.scope: Deactivated successfully. 
Jun 20 18:52:08.259713 containerd[1726]: time="2025-06-20T18:52:08.259502698Z" level=info msg="StartContainer for \"a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d\" returns successfully" Jun 20 18:52:08.792330 containerd[1726]: time="2025-06-20T18:52:08.791882541Z" level=info msg="shim disconnected" id=a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d namespace=k8s.io Jun 20 18:52:08.792330 containerd[1726]: time="2025-06-20T18:52:08.792283945Z" level=warning msg="cleaning up after shim disconnected" id=a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d namespace=k8s.io Jun 20 18:52:08.792330 containerd[1726]: time="2025-06-20T18:52:08.792299546Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:52:09.080200 containerd[1726]: time="2025-06-20T18:52:09.079331480Z" level=info msg="CreateContainer within sandbox \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 18:52:09.098066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d-rootfs.mount: Deactivated successfully. Jun 20 18:52:09.114850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3521464036.mount: Deactivated successfully. Jun 20 18:52:09.123424 containerd[1726]: time="2025-06-20T18:52:09.123376931Z" level=info msg="CreateContainer within sandbox \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2\"" Jun 20 18:52:09.125176 containerd[1726]: time="2025-06-20T18:52:09.124004037Z" level=info msg="StartContainer for \"cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2\"" Jun 20 18:52:09.161166 systemd[1]: Started cri-containerd-cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2.scope - libcontainer container cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2. Jun 20 18:52:09.195938 containerd[1726]: time="2025-06-20T18:52:09.195859472Z" level=info msg="StartContainer for \"cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2\" returns successfully" Jun 20 18:52:09.296186 kubelet[3290]: I0620 18:52:09.296124 3290 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jun 20 18:52:09.357970 systemd[1]: Created slice kubepods-burstable-pode6f294c7_0f53_42c4_bba2_02bc6062c453.slice - libcontainer container kubepods-burstable-pode6f294c7_0f53_42c4_bba2_02bc6062c453.slice. Jun 20 18:52:09.371907 systemd[1]: Created slice kubepods-burstable-podcdca4cdf_067e_4ec2_a791_785aaa462003.slice - libcontainer container kubepods-burstable-podcdca4cdf_067e_4ec2_a791_785aaa462003.slice. 
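[Editor's note] The kubepods slice names created above follow a fixed convention visible throughout this log: QoS class plus the pod UID with dashes replaced by underscores, e.g. UID cdca4cdf-067e-4ec2-a791-785aaa462003 becomes kubepods-burstable-podcdca4cdf_067e_4ec2_a791_785aaa462003.slice. A one-liner reproducing the mapping; the helper name is illustrative:

    def pod_slice(qos: str, uid: str) -> str:
        """systemd slice name the kubelet creates for a pod cgroup (dashes in the UID become underscores)."""
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice("burstable", "cdca4cdf-067e-4ec2-a791-785aaa462003"))
    # kubepods-burstable-podcdca4cdf_067e_4ec2_a791_785aaa462003.slice  (matches the logged slice name)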
Jun 20 18:52:09.400560 kubelet[3290]: I0620 18:52:09.400265 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vq4q\" (UniqueName: \"kubernetes.io/projected/e6f294c7-0f53-42c4-bba2-02bc6062c453-kube-api-access-4vq4q\") pod \"coredns-7c65d6cfc9-cnw9l\" (UID: \"e6f294c7-0f53-42c4-bba2-02bc6062c453\") " pod="kube-system/coredns-7c65d6cfc9-cnw9l" Jun 20 18:52:09.400560 kubelet[3290]: I0620 18:52:09.400318 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdca4cdf-067e-4ec2-a791-785aaa462003-config-volume\") pod \"coredns-7c65d6cfc9-4pn2r\" (UID: \"cdca4cdf-067e-4ec2-a791-785aaa462003\") " pod="kube-system/coredns-7c65d6cfc9-4pn2r" Jun 20 18:52:09.400560 kubelet[3290]: I0620 18:52:09.400345 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6f294c7-0f53-42c4-bba2-02bc6062c453-config-volume\") pod \"coredns-7c65d6cfc9-cnw9l\" (UID: \"e6f294c7-0f53-42c4-bba2-02bc6062c453\") " pod="kube-system/coredns-7c65d6cfc9-cnw9l" Jun 20 18:52:09.400560 kubelet[3290]: I0620 18:52:09.400374 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rnqz\" (UniqueName: \"kubernetes.io/projected/cdca4cdf-067e-4ec2-a791-785aaa462003-kube-api-access-4rnqz\") pod \"coredns-7c65d6cfc9-4pn2r\" (UID: \"cdca4cdf-067e-4ec2-a791-785aaa462003\") " pod="kube-system/coredns-7c65d6cfc9-4pn2r" Jun 20 18:52:09.667258 containerd[1726]: time="2025-06-20T18:52:09.666635998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cnw9l,Uid:e6f294c7-0f53-42c4-bba2-02bc6062c453,Namespace:kube-system,Attempt:0,}" Jun 20 18:52:09.679058 containerd[1726]: time="2025-06-20T18:52:09.678800218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4pn2r,Uid:cdca4cdf-067e-4ec2-a791-785aaa462003,Namespace:kube-system,Attempt:0,}" Jun 20 18:52:10.102181 kubelet[3290]: I0620 18:52:10.102100 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9pjgb" podStartSLOduration=9.514152378 podStartE2EDuration="18.102059613s" podCreationTimestamp="2025-06-20 18:51:52 +0000 UTC" firstStartedPulling="2025-06-20 18:51:52.896560324 +0000 UTC m=+5.645519369" lastFinishedPulling="2025-06-20 18:52:01.484467559 +0000 UTC m=+14.233426604" observedRunningTime="2025-06-20 18:52:10.100514698 +0000 UTC m=+22.849473743" watchObservedRunningTime="2025-06-20 18:52:10.102059613 +0000 UTC m=+22.851018658" Jun 20 18:52:10.112634 systemd[1]: run-containerd-runc-k8s.io-cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2-runc.7Fzi8U.mount: Deactivated successfully. 
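[Editor's note] The pod_startup_latency_tracker entry for cilium-9pjgb is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Reproducing the arithmetic from the logged timestamps, truncated to microseconds:

    from datetime import datetime, timezone

    def ts(s: str) -> datetime:
        # Logged as "2025-06-20 18:52:10.102059613 +0000 UTC"; keep microsecond precision.
        return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    created        = datetime(2025, 6, 20, 18, 51, 52, tzinfo=timezone.utc)
    pull_started   = ts("2025-06-20 18:51:52.896560324")
    pull_finished  = ts("2025-06-20 18:52:01.484467559")
    watch_observed = ts("2025-06-20 18:52:10.102059613")

    e2e = (watch_observed - created).total_seconds()
    slo = e2e - (pull_finished - pull_started).total_seconds()
    print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")
    # ~18.102059s and ~9.514152s, matching the logged 18.102059613s and 9.514152378s up to truncation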
Jun 20 18:52:11.365410 systemd-networkd[1330]: cilium_host: Link UP Jun 20 18:52:11.365608 systemd-networkd[1330]: cilium_net: Link UP Jun 20 18:52:11.365793 systemd-networkd[1330]: cilium_net: Gained carrier Jun 20 18:52:11.367720 systemd-networkd[1330]: cilium_host: Gained carrier Jun 20 18:52:11.367912 systemd-networkd[1330]: cilium_net: Gained IPv6LL Jun 20 18:52:11.632104 systemd-networkd[1330]: cilium_vxlan: Link UP Jun 20 18:52:11.632113 systemd-networkd[1330]: cilium_vxlan: Gained carrier Jun 20 18:52:11.982137 kernel: NET: Registered PF_ALG protocol family Jun 20 18:52:12.201134 systemd-networkd[1330]: cilium_host: Gained IPv6LL Jun 20 18:52:12.805973 systemd-networkd[1330]: lxc_health: Link UP Jun 20 18:52:12.812198 systemd-networkd[1330]: lxc_health: Gained carrier Jun 20 18:52:13.275237 systemd-networkd[1330]: lxc2a02e215a5b9: Link UP Jun 20 18:52:13.290764 systemd-networkd[1330]: lxc5364dbcdc3d2: Link UP Jun 20 18:52:13.294024 kernel: eth0: renamed from tmp033e1 Jun 20 18:52:13.304297 kernel: eth0: renamed from tmpee171 Jun 20 18:52:13.308387 systemd-networkd[1330]: lxc2a02e215a5b9: Gained carrier Jun 20 18:52:13.308618 systemd-networkd[1330]: lxc5364dbcdc3d2: Gained carrier Jun 20 18:52:13.354062 systemd-networkd[1330]: cilium_vxlan: Gained IPv6LL Jun 20 18:52:14.443084 systemd-networkd[1330]: lxc2a02e215a5b9: Gained IPv6LL Jun 20 18:52:14.569134 systemd-networkd[1330]: lxc5364dbcdc3d2: Gained IPv6LL Jun 20 18:52:14.634822 systemd-networkd[1330]: lxc_health: Gained IPv6LL Jun 20 18:52:17.095991 containerd[1726]: time="2025-06-20T18:52:17.095355526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:52:17.095991 containerd[1726]: time="2025-06-20T18:52:17.095421427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:52:17.095991 containerd[1726]: time="2025-06-20T18:52:17.095438327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:52:17.095991 containerd[1726]: time="2025-06-20T18:52:17.095542528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:52:17.107952 containerd[1726]: time="2025-06-20T18:52:17.106134033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:52:17.107952 containerd[1726]: time="2025-06-20T18:52:17.106202134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:52:17.107952 containerd[1726]: time="2025-06-20T18:52:17.106221334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:52:17.107952 containerd[1726]: time="2025-06-20T18:52:17.106313935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:52:17.157134 systemd[1]: Started cri-containerd-033e19509896c361b25c680937ab22ed4631b930b5808393365a8f82b946c30e.scope - libcontainer container 033e19509896c361b25c680937ab22ed4631b930b5808393365a8f82b946c30e. 
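[Editor's note] The systemd-networkd lines above show the cilium datapath interfaces coming up in order: cilium_host and cilium_net, then the cilium_vxlan overlay, then lxc_health and the per-pod lxc* veths (the kernel eth0 rename messages are presumably the container ends being moved into pod network namespaces). A small sketch (hypothetical helper) that folds those messages into a per-link event list:

    import re
    from collections import defaultdict

    EVENT = re.compile(r"systemd-networkd\[\d+\]: (?P<link>[\w.-]+): (?P<event>Link UP|Gained carrier|Gained IPv6LL)")

    def link_timeline(journal_lines):
        """Map interface name -> ordered list of networkd link events."""
        timeline = defaultdict(list)
        for line in journal_lines:
            if (m := EVENT.search(line)):
                timeline[m.group("link")].append(m.group("event"))
        return dict(timeline)

    sample = "Jun 20 18:52:11.365410 systemd-networkd[1330]: cilium_host: Link UP"
    print(link_timeline([sample]))  # {'cilium_host': ['Link UP']}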
Jun 20 18:52:17.160504 systemd[1]: Started cri-containerd-ee171c888d246b2d3b48f893090713c2fd1dfc5995820052874c3051f6ebf534.scope - libcontainer container ee171c888d246b2d3b48f893090713c2fd1dfc5995820052874c3051f6ebf534. Jun 20 18:52:17.244682 containerd[1726]: time="2025-06-20T18:52:17.244584205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4pn2r,Uid:cdca4cdf-067e-4ec2-a791-785aaa462003,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee171c888d246b2d3b48f893090713c2fd1dfc5995820052874c3051f6ebf534\"" Jun 20 18:52:17.257385 containerd[1726]: time="2025-06-20T18:52:17.257233931Z" level=info msg="CreateContainer within sandbox \"ee171c888d246b2d3b48f893090713c2fd1dfc5995820052874c3051f6ebf534\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:52:17.267300 containerd[1726]: time="2025-06-20T18:52:17.267224730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cnw9l,Uid:e6f294c7-0f53-42c4-bba2-02bc6062c453,Namespace:kube-system,Attempt:0,} returns sandbox id \"033e19509896c361b25c680937ab22ed4631b930b5808393365a8f82b946c30e\"" Jun 20 18:52:17.271529 containerd[1726]: time="2025-06-20T18:52:17.271083568Z" level=info msg="CreateContainer within sandbox \"033e19509896c361b25c680937ab22ed4631b930b5808393365a8f82b946c30e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:52:17.331553 containerd[1726]: time="2025-06-20T18:52:17.331366465Z" level=info msg="CreateContainer within sandbox \"ee171c888d246b2d3b48f893090713c2fd1dfc5995820052874c3051f6ebf534\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"05f73049decea3945db90cac46f9c86a8c0ade7637ea5078fc0b55ccb1290c64\"" Jun 20 18:52:17.333097 containerd[1726]: time="2025-06-20T18:52:17.332011872Z" level=info msg="StartContainer for \"05f73049decea3945db90cac46f9c86a8c0ade7637ea5078fc0b55ccb1290c64\"" Jun 20 18:52:17.336751 containerd[1726]: time="2025-06-20T18:52:17.336185213Z" level=info msg="CreateContainer within sandbox \"033e19509896c361b25c680937ab22ed4631b930b5808393365a8f82b946c30e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d23e652a850ef2b5f7b9d19a497b893bea3db80d5e279ffa417f7ecd02f8582\"" Jun 20 18:52:17.338952 containerd[1726]: time="2025-06-20T18:52:17.338634537Z" level=info msg="StartContainer for \"9d23e652a850ef2b5f7b9d19a497b893bea3db80d5e279ffa417f7ecd02f8582\"" Jun 20 18:52:17.367293 systemd[1]: Started cri-containerd-05f73049decea3945db90cac46f9c86a8c0ade7637ea5078fc0b55ccb1290c64.scope - libcontainer container 05f73049decea3945db90cac46f9c86a8c0ade7637ea5078fc0b55ccb1290c64. Jun 20 18:52:17.378143 systemd[1]: Started cri-containerd-9d23e652a850ef2b5f7b9d19a497b893bea3db80d5e279ffa417f7ecd02f8582.scope - libcontainer container 9d23e652a850ef2b5f7b9d19a497b893bea3db80d5e279ffa417f7ecd02f8582. Jun 20 18:52:17.429505 containerd[1726]: time="2025-06-20T18:52:17.429451883Z" level=info msg="StartContainer for \"05f73049decea3945db90cac46f9c86a8c0ade7637ea5078fc0b55ccb1290c64\" returns successfully" Jun 20 18:52:17.429691 containerd[1726]: time="2025-06-20T18:52:17.429565184Z" level=info msg="StartContainer for \"9d23e652a850ef2b5f7b9d19a497b893bea3db80d5e279ffa417f7ecd02f8582\" returns successfully" Jun 20 18:52:18.111210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3863143598.mount: Deactivated successfully. 
Jun 20 18:52:18.117405 kubelet[3290]: I0620 18:52:18.116797 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4pn2r" podStartSLOduration=26.116771027 podStartE2EDuration="26.116771027s" podCreationTimestamp="2025-06-20 18:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:52:18.114621909 +0000 UTC m=+30.863581054" watchObservedRunningTime="2025-06-20 18:52:18.116771027 +0000 UTC m=+30.865730072" Jun 20 18:52:18.134325 kubelet[3290]: I0620 18:52:18.134260 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-cnw9l" podStartSLOduration=26.13423657 podStartE2EDuration="26.13423657s" podCreationTimestamp="2025-06-20 18:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:52:18.13305876 +0000 UTC m=+30.882017805" watchObservedRunningTime="2025-06-20 18:52:18.13423657 +0000 UTC m=+30.883195715" Jun 20 18:53:25.890228 systemd[1]: Started sshd@7-10.200.8.40:22-10.200.16.10:54778.service - OpenSSH per-connection server daemon (10.200.16.10:54778). Jun 20 18:53:26.518569 sshd[4689]: Accepted publickey for core from 10.200.16.10 port 54778 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:53:26.520113 sshd-session[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:53:26.524415 systemd-logind[1699]: New session 10 of user core. Jun 20 18:53:26.535108 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 18:53:27.035500 sshd[4691]: Connection closed by 10.200.16.10 port 54778 Jun 20 18:53:27.036447 sshd-session[4689]: pam_unix(sshd:session): session closed for user core Jun 20 18:53:27.040750 systemd[1]: sshd@7-10.200.8.40:22-10.200.16.10:54778.service: Deactivated successfully. Jun 20 18:53:27.043148 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 18:53:27.044021 systemd-logind[1699]: Session 10 logged out. Waiting for processes to exit. Jun 20 18:53:27.044976 systemd-logind[1699]: Removed session 10. Jun 20 18:53:32.153256 systemd[1]: Started sshd@8-10.200.8.40:22-10.200.16.10:56846.service - OpenSSH per-connection server daemon (10.200.16.10:56846). Jun 20 18:53:32.778190 sshd[4704]: Accepted publickey for core from 10.200.16.10 port 56846 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:53:32.781846 sshd-session[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:53:32.795043 systemd-logind[1699]: New session 11 of user core. Jun 20 18:53:32.798125 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 18:53:33.283327 sshd[4706]: Connection closed by 10.200.16.10 port 56846 Jun 20 18:53:33.284265 sshd-session[4704]: pam_unix(sshd:session): session closed for user core Jun 20 18:53:33.289273 systemd[1]: sshd@8-10.200.8.40:22-10.200.16.10:56846.service: Deactivated successfully. Jun 20 18:53:33.291987 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 18:53:33.293189 systemd-logind[1699]: Session 11 logged out. Waiting for processes to exit. Jun 20 18:53:33.294415 systemd-logind[1699]: Removed session 11. Jun 20 18:53:38.406259 systemd[1]: Started sshd@9-10.200.8.40:22-10.200.16.10:56862.service - OpenSSH per-connection server daemon (10.200.16.10:56862). 
Jun 20 18:53:39.032765 sshd[4718]: Accepted publickey for core from 10.200.16.10 port 56862 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:53:39.034420 sshd-session[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:53:39.039568 systemd-logind[1699]: New session 12 of user core. Jun 20 18:53:39.045093 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 18:53:39.532306 sshd[4720]: Connection closed by 10.200.16.10 port 56862 Jun 20 18:53:39.533176 sshd-session[4718]: pam_unix(sshd:session): session closed for user core Jun 20 18:53:39.536652 systemd[1]: sshd@9-10.200.8.40:22-10.200.16.10:56862.service: Deactivated successfully. Jun 20 18:53:39.539372 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 18:53:39.541819 systemd-logind[1699]: Session 12 logged out. Waiting for processes to exit. Jun 20 18:53:39.543074 systemd-logind[1699]: Removed session 12. Jun 20 18:53:44.651266 systemd[1]: Started sshd@10-10.200.8.40:22-10.200.16.10:56986.service - OpenSSH per-connection server daemon (10.200.16.10:56986). Jun 20 18:53:45.277795 sshd[4732]: Accepted publickey for core from 10.200.16.10 port 56986 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:53:45.279364 sshd-session[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:53:45.283893 systemd-logind[1699]: New session 13 of user core. Jun 20 18:53:45.289093 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 18:53:45.780222 sshd[4734]: Connection closed by 10.200.16.10 port 56986 Jun 20 18:53:45.780987 sshd-session[4732]: pam_unix(sshd:session): session closed for user core Jun 20 18:53:45.784974 systemd[1]: sshd@10-10.200.8.40:22-10.200.16.10:56986.service: Deactivated successfully. Jun 20 18:53:45.787309 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 18:53:45.788218 systemd-logind[1699]: Session 13 logged out. Waiting for processes to exit. Jun 20 18:53:45.789324 systemd-logind[1699]: Removed session 13. Jun 20 18:53:50.898223 systemd[1]: Started sshd@11-10.200.8.40:22-10.200.16.10:48378.service - OpenSSH per-connection server daemon (10.200.16.10:48378). Jun 20 18:53:51.521513 sshd[4749]: Accepted publickey for core from 10.200.16.10 port 48378 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:53:51.523148 sshd-session[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:53:51.527453 systemd-logind[1699]: New session 14 of user core. Jun 20 18:53:51.534224 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 18:53:52.062951 sshd[4751]: Connection closed by 10.200.16.10 port 48378 Jun 20 18:53:52.063698 sshd-session[4749]: pam_unix(sshd:session): session closed for user core Jun 20 18:53:52.066641 systemd[1]: sshd@11-10.200.8.40:22-10.200.16.10:48378.service: Deactivated successfully. Jun 20 18:53:52.068992 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 18:53:52.070874 systemd-logind[1699]: Session 14 logged out. Waiting for processes to exit. Jun 20 18:53:52.072156 systemd-logind[1699]: Removed session 14. Jun 20 18:53:52.183533 systemd[1]: Started sshd@12-10.200.8.40:22-10.200.16.10:48380.service - OpenSSH per-connection server daemon (10.200.16.10:48380). 
Jun 20 18:53:52.808179 sshd[4764]: Accepted publickey for core from 10.200.16.10 port 48380 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:53:52.809671 sshd-session[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:53:52.815619 systemd-logind[1699]: New session 15 of user core. Jun 20 18:53:52.818338 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 18:53:53.341248 sshd[4766]: Connection closed by 10.200.16.10 port 48380 Jun 20 18:53:53.342157 sshd-session[4764]: pam_unix(sshd:session): session closed for user core Jun 20 18:53:53.345586 systemd[1]: sshd@12-10.200.8.40:22-10.200.16.10:48380.service: Deactivated successfully. Jun 20 18:53:53.348625 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 18:53:53.350778 systemd-logind[1699]: Session 15 logged out. Waiting for processes to exit. Jun 20 18:53:53.352144 systemd-logind[1699]: Removed session 15. Jun 20 18:53:53.459179 systemd[1]: Started sshd@13-10.200.8.40:22-10.200.16.10:48392.service - OpenSSH per-connection server daemon (10.200.16.10:48392). Jun 20 18:53:54.088994 sshd[4778]: Accepted publickey for core from 10.200.16.10 port 48392 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:53:54.090449 sshd-session[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:53:54.094756 systemd-logind[1699]: New session 16 of user core. Jun 20 18:53:54.102063 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 18:53:54.587250 sshd[4780]: Connection closed by 10.200.16.10 port 48392 Jun 20 18:53:54.588126 sshd-session[4778]: pam_unix(sshd:session): session closed for user core Jun 20 18:53:54.592454 systemd[1]: sshd@13-10.200.8.40:22-10.200.16.10:48392.service: Deactivated successfully. Jun 20 18:53:54.595175 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 18:53:54.596760 systemd-logind[1699]: Session 16 logged out. Waiting for processes to exit. Jun 20 18:53:54.597880 systemd-logind[1699]: Removed session 16. Jun 20 18:53:59.707227 systemd[1]: Started sshd@14-10.200.8.40:22-10.200.16.10:55676.service - OpenSSH per-connection server daemon (10.200.16.10:55676). Jun 20 18:54:00.333389 sshd[4792]: Accepted publickey for core from 10.200.16.10 port 55676 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:54:00.334829 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:00.339221 systemd-logind[1699]: New session 17 of user core. Jun 20 18:54:00.345390 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 18:54:00.833754 sshd[4794]: Connection closed by 10.200.16.10 port 55676 Jun 20 18:54:00.834494 sshd-session[4792]: pam_unix(sshd:session): session closed for user core Jun 20 18:54:00.837587 systemd[1]: sshd@14-10.200.8.40:22-10.200.16.10:55676.service: Deactivated successfully. Jun 20 18:54:00.839955 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 18:54:00.841687 systemd-logind[1699]: Session 17 logged out. Waiting for processes to exit. Jun 20 18:54:00.843584 systemd-logind[1699]: Removed session 17. Jun 20 18:54:00.949132 systemd[1]: Started sshd@15-10.200.8.40:22-10.200.16.10:55686.service - OpenSSH per-connection server daemon (10.200.16.10:55686). 
Jun 20 18:54:01.584115 sshd[4805]: Accepted publickey for core from 10.200.16.10 port 55686 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:54:01.585745 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:01.590568 systemd-logind[1699]: New session 18 of user core. Jun 20 18:54:01.598111 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 18:54:02.153098 sshd[4807]: Connection closed by 10.200.16.10 port 55686 Jun 20 18:54:02.153998 sshd-session[4805]: pam_unix(sshd:session): session closed for user core Jun 20 18:54:02.158551 systemd[1]: sshd@15-10.200.8.40:22-10.200.16.10:55686.service: Deactivated successfully. Jun 20 18:54:02.161138 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 18:54:02.162890 systemd-logind[1699]: Session 18 logged out. Waiting for processes to exit. Jun 20 18:54:02.163982 systemd-logind[1699]: Removed session 18. Jun 20 18:54:02.275311 systemd[1]: Started sshd@16-10.200.8.40:22-10.200.16.10:55692.service - OpenSSH per-connection server daemon (10.200.16.10:55692). Jun 20 18:54:02.902146 sshd[4817]: Accepted publickey for core from 10.200.16.10 port 55692 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:54:02.903745 sshd-session[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:02.908204 systemd-logind[1699]: New session 19 of user core. Jun 20 18:54:02.913078 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 18:54:04.903870 sshd[4819]: Connection closed by 10.200.16.10 port 55692 Jun 20 18:54:04.904801 sshd-session[4817]: pam_unix(sshd:session): session closed for user core Jun 20 18:54:04.908976 systemd[1]: sshd@16-10.200.8.40:22-10.200.16.10:55692.service: Deactivated successfully. Jun 20 18:54:04.911144 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 18:54:04.911906 systemd-logind[1699]: Session 19 logged out. Waiting for processes to exit. Jun 20 18:54:04.913171 systemd-logind[1699]: Removed session 19. Jun 20 18:54:05.018247 systemd[1]: Started sshd@17-10.200.8.40:22-10.200.16.10:55696.service - OpenSSH per-connection server daemon (10.200.16.10:55696). Jun 20 18:54:05.644748 sshd[4836]: Accepted publickey for core from 10.200.16.10 port 55696 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:54:05.646473 sshd-session[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:05.651538 systemd-logind[1699]: New session 20 of user core. Jun 20 18:54:05.660076 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 18:54:06.243913 sshd[4838]: Connection closed by 10.200.16.10 port 55696 Jun 20 18:54:06.244790 sshd-session[4836]: pam_unix(sshd:session): session closed for user core Jun 20 18:54:06.248883 systemd[1]: sshd@17-10.200.8.40:22-10.200.16.10:55696.service: Deactivated successfully. Jun 20 18:54:06.251215 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 18:54:06.252183 systemd-logind[1699]: Session 20 logged out. Waiting for processes to exit. Jun 20 18:54:06.253379 systemd-logind[1699]: Removed session 20. Jun 20 18:54:06.363223 systemd[1]: Started sshd@18-10.200.8.40:22-10.200.16.10:55698.service - OpenSSH per-connection server daemon (10.200.16.10:55698). 
Jun 20 18:54:06.988500 sshd[4848]: Accepted publickey for core from 10.200.16.10 port 55698 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:54:06.990257 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:06.994973 systemd-logind[1699]: New session 21 of user core. Jun 20 18:54:07.000064 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 18:54:07.486142 sshd[4850]: Connection closed by 10.200.16.10 port 55698 Jun 20 18:54:07.486834 sshd-session[4848]: pam_unix(sshd:session): session closed for user core Jun 20 18:54:07.489882 systemd[1]: sshd@18-10.200.8.40:22-10.200.16.10:55698.service: Deactivated successfully. Jun 20 18:54:07.492277 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 18:54:07.493841 systemd-logind[1699]: Session 21 logged out. Waiting for processes to exit. Jun 20 18:54:07.495254 systemd-logind[1699]: Removed session 21. Jun 20 18:54:12.602250 systemd[1]: Started sshd@19-10.200.8.40:22-10.200.16.10:60630.service - OpenSSH per-connection server daemon (10.200.16.10:60630). Jun 20 18:54:13.227884 sshd[4865]: Accepted publickey for core from 10.200.16.10 port 60630 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:54:13.229785 sshd-session[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:13.235053 systemd-logind[1699]: New session 22 of user core. Jun 20 18:54:13.241102 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 18:54:13.729062 sshd[4867]: Connection closed by 10.200.16.10 port 60630 Jun 20 18:54:13.729761 sshd-session[4865]: pam_unix(sshd:session): session closed for user core Jun 20 18:54:13.733574 systemd[1]: sshd@19-10.200.8.40:22-10.200.16.10:60630.service: Deactivated successfully. Jun 20 18:54:13.735885 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 18:54:13.736667 systemd-logind[1699]: Session 22 logged out. Waiting for processes to exit. Jun 20 18:54:13.737854 systemd-logind[1699]: Removed session 22. Jun 20 18:54:18.846243 systemd[1]: Started sshd@20-10.200.8.40:22-10.200.16.10:39348.service - OpenSSH per-connection server daemon (10.200.16.10:39348). Jun 20 18:54:19.471508 sshd[4879]: Accepted publickey for core from 10.200.16.10 port 39348 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:54:19.473079 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:19.477378 systemd-logind[1699]: New session 23 of user core. Jun 20 18:54:19.487073 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 18:54:19.975383 sshd[4881]: Connection closed by 10.200.16.10 port 39348 Jun 20 18:54:19.976479 sshd-session[4879]: pam_unix(sshd:session): session closed for user core Jun 20 18:54:19.981902 systemd[1]: sshd@20-10.200.8.40:22-10.200.16.10:39348.service: Deactivated successfully. Jun 20 18:54:19.984996 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 18:54:19.988230 systemd-logind[1699]: Session 23 logged out. Waiting for processes to exit. Jun 20 18:54:19.990107 systemd-logind[1699]: Removed session 23. Jun 20 18:54:25.093212 systemd[1]: Started sshd@21-10.200.8.40:22-10.200.16.10:39358.service - OpenSSH per-connection server daemon (10.200.16.10:39358). 
Jun 20 18:54:25.718141 sshd[4895]: Accepted publickey for core from 10.200.16.10 port 39358 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:54:25.719542 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:25.723736 systemd-logind[1699]: New session 24 of user core. Jun 20 18:54:25.732078 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 18:54:26.219029 sshd[4897]: Connection closed by 10.200.16.10 port 39358 Jun 20 18:54:26.219849 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Jun 20 18:54:26.224564 systemd[1]: sshd@21-10.200.8.40:22-10.200.16.10:39358.service: Deactivated successfully. Jun 20 18:54:26.227093 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 18:54:26.227851 systemd-logind[1699]: Session 24 logged out. Waiting for processes to exit. Jun 20 18:54:26.228822 systemd-logind[1699]: Removed session 24. Jun 20 18:54:26.334235 systemd[1]: Started sshd@22-10.200.8.40:22-10.200.16.10:39360.service - OpenSSH per-connection server daemon (10.200.16.10:39360). Jun 20 18:54:26.959651 sshd[4909]: Accepted publickey for core from 10.200.16.10 port 39360 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:54:26.961128 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:26.965825 systemd-logind[1699]: New session 25 of user core. Jun 20 18:54:26.971077 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 18:54:28.615565 containerd[1726]: time="2025-06-20T18:54:28.615511475Z" level=info msg="StopContainer for \"208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68\" with timeout 30 (s)" Jun 20 18:54:28.617000 containerd[1726]: time="2025-06-20T18:54:28.616773391Z" level=info msg="Stop container \"208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68\" with signal terminated" Jun 20 18:54:28.634853 systemd[1]: cri-containerd-208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68.scope: Deactivated successfully. Jun 20 18:54:28.645219 containerd[1726]: time="2025-06-20T18:54:28.645166359Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:54:28.655037 containerd[1726]: time="2025-06-20T18:54:28.654813384Z" level=info msg="StopContainer for \"cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2\" with timeout 2 (s)" Jun 20 18:54:28.655187 containerd[1726]: time="2025-06-20T18:54:28.655170189Z" level=info msg="Stop container \"cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2\" with signal terminated" Jun 20 18:54:28.665827 systemd-networkd[1330]: lxc_health: Link DOWN Jun 20 18:54:28.665851 systemd-networkd[1330]: lxc_health: Lost carrier Jun 20 18:54:28.677040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68-rootfs.mount: Deactivated successfully. Jun 20 18:54:28.686869 systemd[1]: cri-containerd-cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2.scope: Deactivated successfully. Jun 20 18:54:28.687397 systemd[1]: cri-containerd-cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2.scope: Consumed 7.274s CPU time, 124.9M memory peak, 136K read from disk, 13.3M written to disk. 
Jun 20 18:54:28.708681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2-rootfs.mount: Deactivated successfully. Jun 20 18:54:28.748523 containerd[1726]: time="2025-06-20T18:54:28.748270894Z" level=info msg="shim disconnected" id=cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2 namespace=k8s.io Jun 20 18:54:28.749226 containerd[1726]: time="2025-06-20T18:54:28.748534098Z" level=warning msg="cleaning up after shim disconnected" id=cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2 namespace=k8s.io Jun 20 18:54:28.749226 containerd[1726]: time="2025-06-20T18:54:28.748550398Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:54:28.749226 containerd[1726]: time="2025-06-20T18:54:28.748376696Z" level=info msg="shim disconnected" id=208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68 namespace=k8s.io Jun 20 18:54:28.749226 containerd[1726]: time="2025-06-20T18:54:28.748629099Z" level=warning msg="cleaning up after shim disconnected" id=208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68 namespace=k8s.io Jun 20 18:54:28.749226 containerd[1726]: time="2025-06-20T18:54:28.748637599Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:54:28.768977 containerd[1726]: time="2025-06-20T18:54:28.768329954Z" level=warning msg="cleanup warnings time=\"2025-06-20T18:54:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 18:54:28.774399 containerd[1726]: time="2025-06-20T18:54:28.774353632Z" level=info msg="StopContainer for \"208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68\" returns successfully" Jun 20 18:54:28.774618 containerd[1726]: time="2025-06-20T18:54:28.774485134Z" level=info msg="StopContainer for \"cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2\" returns successfully" Jun 20 18:54:28.775168 containerd[1726]: time="2025-06-20T18:54:28.775138642Z" level=info msg="StopPodSandbox for \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\"" Jun 20 18:54:28.775327 containerd[1726]: time="2025-06-20T18:54:28.775185643Z" level=info msg="Container to stop \"dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:54:28.775327 containerd[1726]: time="2025-06-20T18:54:28.775229143Z" level=info msg="Container to stop \"a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:54:28.775327 containerd[1726]: time="2025-06-20T18:54:28.775245744Z" level=info msg="Container to stop \"cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:54:28.775327 containerd[1726]: time="2025-06-20T18:54:28.775258844Z" level=info msg="Container to stop \"f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:54:28.775327 containerd[1726]: time="2025-06-20T18:54:28.775270344Z" level=info msg="Container to stop \"85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:54:28.777004 containerd[1726]: 
time="2025-06-20T18:54:28.775233844Z" level=info msg="StopPodSandbox for \"2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3\"" Jun 20 18:54:28.777131 containerd[1726]: time="2025-06-20T18:54:28.777018467Z" level=info msg="Container to stop \"208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:54:28.778610 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761-shm.mount: Deactivated successfully. Jun 20 18:54:28.784558 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3-shm.mount: Deactivated successfully. Jun 20 18:54:28.789397 systemd[1]: cri-containerd-c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761.scope: Deactivated successfully. Jun 20 18:54:28.798360 systemd[1]: cri-containerd-2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3.scope: Deactivated successfully. Jun 20 18:54:28.838151 containerd[1726]: time="2025-06-20T18:54:28.838082957Z" level=info msg="shim disconnected" id=c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761 namespace=k8s.io Jun 20 18:54:28.838479 containerd[1726]: time="2025-06-20T18:54:28.838439262Z" level=warning msg="cleaning up after shim disconnected" id=c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761 namespace=k8s.io Jun 20 18:54:28.838479 containerd[1726]: time="2025-06-20T18:54:28.838477263Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:54:28.839525 containerd[1726]: time="2025-06-20T18:54:28.839470575Z" level=info msg="shim disconnected" id=2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3 namespace=k8s.io Jun 20 18:54:28.840441 containerd[1726]: time="2025-06-20T18:54:28.840414188Z" level=warning msg="cleaning up after shim disconnected" id=2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3 namespace=k8s.io Jun 20 18:54:28.840570 containerd[1726]: time="2025-06-20T18:54:28.840553489Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:54:28.858906 containerd[1726]: time="2025-06-20T18:54:28.858867127Z" level=warning msg="cleanup warnings time=\"2025-06-20T18:54:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 18:54:28.859204 containerd[1726]: time="2025-06-20T18:54:28.859103730Z" level=info msg="TearDown network for sandbox \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" successfully" Jun 20 18:54:28.859204 containerd[1726]: time="2025-06-20T18:54:28.859128130Z" level=info msg="StopPodSandbox for \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" returns successfully" Jun 20 18:54:28.860863 containerd[1726]: time="2025-06-20T18:54:28.860828652Z" level=info msg="TearDown network for sandbox \"2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3\" successfully" Jun 20 18:54:28.860863 containerd[1726]: time="2025-06-20T18:54:28.860859952Z" level=info msg="StopPodSandbox for \"2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3\" returns successfully" Jun 20 18:54:28.971808 kubelet[3290]: I0620 18:54:28.971386 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-clustermesh-secrets\") pod \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " Jun 20 18:54:28.971808 kubelet[3290]: I0620 18:54:28.971443 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cilium-run\") pod \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " Jun 20 18:54:28.971808 kubelet[3290]: I0620 18:54:28.971480 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t6l9\" (UniqueName: \"kubernetes.io/projected/9c8a9c4e-bd33-4599-9178-85279d02aade-kube-api-access-4t6l9\") pod \"9c8a9c4e-bd33-4599-9178-85279d02aade\" (UID: \"9c8a9c4e-bd33-4599-9178-85279d02aade\") " Jun 20 18:54:28.971808 kubelet[3290]: I0620 18:54:28.971509 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-host-proc-sys-net\") pod \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " Jun 20 18:54:28.971808 kubelet[3290]: I0620 18:54:28.971537 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cilium-config-path\") pod \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " Jun 20 18:54:28.971808 kubelet[3290]: I0620 18:54:28.971561 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfq5w\" (UniqueName: \"kubernetes.io/projected/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-kube-api-access-rfq5w\") pod \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " Jun 20 18:54:28.972564 kubelet[3290]: I0620 18:54:28.971583 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-lib-modules\") pod \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " Jun 20 18:54:28.972564 kubelet[3290]: I0620 18:54:28.971608 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-etc-cni-netd\") pod \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " Jun 20 18:54:28.972564 kubelet[3290]: I0620 18:54:28.971646 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-hostproc\") pod \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " Jun 20 18:54:28.972564 kubelet[3290]: I0620 18:54:28.971669 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cilium-cgroup\") pod \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " Jun 20 18:54:28.972564 kubelet[3290]: I0620 18:54:28.971694 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-xtables-lock\") pod \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " Jun 20 18:54:28.972564 kubelet[3290]: I0620 18:54:28.971717 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-hubble-tls\") pod \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " Jun 20 18:54:28.972827 kubelet[3290]: I0620 18:54:28.971737 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cni-path\") pod \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " Jun 20 18:54:28.972827 kubelet[3290]: I0620 18:54:28.971758 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-bpf-maps\") pod \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " Jun 20 18:54:28.972827 kubelet[3290]: I0620 18:54:28.971781 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c8a9c4e-bd33-4599-9178-85279d02aade-cilium-config-path\") pod \"9c8a9c4e-bd33-4599-9178-85279d02aade\" (UID: \"9c8a9c4e-bd33-4599-9178-85279d02aade\") " Jun 20 18:54:28.972827 kubelet[3290]: I0620 18:54:28.971803 3290 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-host-proc-sys-kernel\") pod \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\" (UID: \"31dd5f32-f5dc-4042-97d6-b0f7837b8c76\") " Jun 20 18:54:28.972827 kubelet[3290]: I0620 18:54:28.971859 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "31dd5f32-f5dc-4042-97d6-b0f7837b8c76" (UID: "31dd5f32-f5dc-4042-97d6-b0f7837b8c76"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:54:28.976457 kubelet[3290]: I0620 18:54:28.971908 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "31dd5f32-f5dc-4042-97d6-b0f7837b8c76" (UID: "31dd5f32-f5dc-4042-97d6-b0f7837b8c76"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:54:28.976457 kubelet[3290]: I0620 18:54:28.974214 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "31dd5f32-f5dc-4042-97d6-b0f7837b8c76" (UID: "31dd5f32-f5dc-4042-97d6-b0f7837b8c76"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:54:28.976457 kubelet[3290]: I0620 18:54:28.975788 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "31dd5f32-f5dc-4042-97d6-b0f7837b8c76" (UID: "31dd5f32-f5dc-4042-97d6-b0f7837b8c76"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:54:28.976457 kubelet[3290]: I0620 18:54:28.975842 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "31dd5f32-f5dc-4042-97d6-b0f7837b8c76" (UID: "31dd5f32-f5dc-4042-97d6-b0f7837b8c76"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:54:28.976457 kubelet[3290]: I0620 18:54:28.975863 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-hostproc" (OuterVolumeSpecName: "hostproc") pod "31dd5f32-f5dc-4042-97d6-b0f7837b8c76" (UID: "31dd5f32-f5dc-4042-97d6-b0f7837b8c76"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:54:28.976656 kubelet[3290]: I0620 18:54:28.975881 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "31dd5f32-f5dc-4042-97d6-b0f7837b8c76" (UID: "31dd5f32-f5dc-4042-97d6-b0f7837b8c76"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:54:28.976656 kubelet[3290]: I0620 18:54:28.975901 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "31dd5f32-f5dc-4042-97d6-b0f7837b8c76" (UID: "31dd5f32-f5dc-4042-97d6-b0f7837b8c76"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:54:28.978107 kubelet[3290]: I0620 18:54:28.978074 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cni-path" (OuterVolumeSpecName: "cni-path") pod "31dd5f32-f5dc-4042-97d6-b0f7837b8c76" (UID: "31dd5f32-f5dc-4042-97d6-b0f7837b8c76"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:54:28.978222 kubelet[3290]: I0620 18:54:28.978160 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "31dd5f32-f5dc-4042-97d6-b0f7837b8c76" (UID: "31dd5f32-f5dc-4042-97d6-b0f7837b8c76"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 18:54:28.981078 kubelet[3290]: I0620 18:54:28.981045 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c8a9c4e-bd33-4599-9178-85279d02aade-kube-api-access-4t6l9" (OuterVolumeSpecName: "kube-api-access-4t6l9") pod "9c8a9c4e-bd33-4599-9178-85279d02aade" (UID: "9c8a9c4e-bd33-4599-9178-85279d02aade"). InnerVolumeSpecName "kube-api-access-4t6l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 18:54:28.983321 kubelet[3290]: I0620 18:54:28.983274 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-kube-api-access-rfq5w" (OuterVolumeSpecName: "kube-api-access-rfq5w") pod "31dd5f32-f5dc-4042-97d6-b0f7837b8c76" (UID: "31dd5f32-f5dc-4042-97d6-b0f7837b8c76"). InnerVolumeSpecName "kube-api-access-rfq5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 18:54:28.984068 kubelet[3290]: I0620 18:54:28.984038 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "31dd5f32-f5dc-4042-97d6-b0f7837b8c76" (UID: "31dd5f32-f5dc-4042-97d6-b0f7837b8c76"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 20 18:54:28.984207 kubelet[3290]: I0620 18:54:28.984187 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "31dd5f32-f5dc-4042-97d6-b0f7837b8c76" (UID: "31dd5f32-f5dc-4042-97d6-b0f7837b8c76"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 20 18:54:28.984342 kubelet[3290]: I0620 18:54:28.984326 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "31dd5f32-f5dc-4042-97d6-b0f7837b8c76" (UID: "31dd5f32-f5dc-4042-97d6-b0f7837b8c76"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 18:54:28.984552 kubelet[3290]: I0620 18:54:28.984529 3290 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c8a9c4e-bd33-4599-9178-85279d02aade-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9c8a9c4e-bd33-4599-9178-85279d02aade" (UID: "9c8a9c4e-bd33-4599-9178-85279d02aade"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 20 18:54:29.072999 kubelet[3290]: I0620 18:54:29.072944 3290 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-host-proc-sys-kernel\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.072999 kubelet[3290]: I0620 18:54:29.072984 3290 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-bpf-maps\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.072999 kubelet[3290]: I0620 18:54:29.073002 3290 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c8a9c4e-bd33-4599-9178-85279d02aade-cilium-config-path\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.073242 kubelet[3290]: I0620 18:54:29.073018 3290 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-host-proc-sys-net\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.073242 kubelet[3290]: I0620 18:54:29.073032 3290 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-clustermesh-secrets\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.073242 kubelet[3290]: I0620 18:54:29.073043 3290 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cilium-run\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.073242 kubelet[3290]: I0620 18:54:29.073057 3290 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t6l9\" (UniqueName: \"kubernetes.io/projected/9c8a9c4e-bd33-4599-9178-85279d02aade-kube-api-access-4t6l9\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.073242 kubelet[3290]: I0620 18:54:29.073067 3290 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cilium-config-path\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.073242 kubelet[3290]: I0620 18:54:29.073078 3290 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfq5w\" (UniqueName: \"kubernetes.io/projected/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-kube-api-access-rfq5w\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.073242 kubelet[3290]: I0620 18:54:29.073088 3290 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-lib-modules\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.073242 kubelet[3290]: I0620 18:54:29.073098 3290 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-etc-cni-netd\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.073433 kubelet[3290]: I0620 18:54:29.073107 3290 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-hostproc\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.073433 kubelet[3290]: 
I0620 18:54:29.073119 3290 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cilium-cgroup\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.073433 kubelet[3290]: I0620 18:54:29.073130 3290 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-xtables-lock\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.073433 kubelet[3290]: I0620 18:54:29.073140 3290 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-hubble-tls\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.073433 kubelet[3290]: I0620 18:54:29.073150 3290 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31dd5f32-f5dc-4042-97d6-b0f7837b8c76-cni-path\") on node \"ci-4230.2.0-a-e7ad40a4c3\" DevicePath \"\"" Jun 20 18:54:29.371534 kubelet[3290]: I0620 18:54:29.371373 3290 scope.go:117] "RemoveContainer" containerID="208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68" Jun 20 18:54:29.375836 containerd[1726]: time="2025-06-20T18:54:29.375365915Z" level=info msg="RemoveContainer for \"208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68\"" Jun 20 18:54:29.379904 systemd[1]: Removed slice kubepods-besteffort-pod9c8a9c4e_bd33_4599_9178_85279d02aade.slice - libcontainer container kubepods-besteffort-pod9c8a9c4e_bd33_4599_9178_85279d02aade.slice. Jun 20 18:54:29.384875 containerd[1726]: time="2025-06-20T18:54:29.384842138Z" level=info msg="RemoveContainer for \"208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68\" returns successfully" Jun 20 18:54:29.385277 kubelet[3290]: I0620 18:54:29.385253 3290 scope.go:117] "RemoveContainer" containerID="208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68" Jun 20 18:54:29.386357 containerd[1726]: time="2025-06-20T18:54:29.386240356Z" level=error msg="ContainerStatus for \"208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68\": not found" Jun 20 18:54:29.386700 kubelet[3290]: E0620 18:54:29.386508 3290 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68\": not found" containerID="208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68" Jun 20 18:54:29.386894 kubelet[3290]: I0620 18:54:29.386641 3290 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68"} err="failed to get container status \"208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68\": rpc error: code = NotFound desc = an error occurred when try to find container \"208b378ddaa75997ba2844910603f24471d994c6476205c464a75eaf62bfec68\": not found" Jun 20 18:54:29.387095 kubelet[3290]: I0620 18:54:29.386967 3290 scope.go:117] "RemoveContainer" containerID="cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2" Jun 20 18:54:29.389070 containerd[1726]: time="2025-06-20T18:54:29.388990992Z" level=info msg="RemoveContainer for 
\"cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2\"" Jun 20 18:54:29.391140 systemd[1]: Removed slice kubepods-burstable-pod31dd5f32_f5dc_4042_97d6_b0f7837b8c76.slice - libcontainer container kubepods-burstable-pod31dd5f32_f5dc_4042_97d6_b0f7837b8c76.slice. Jun 20 18:54:29.391634 systemd[1]: kubepods-burstable-pod31dd5f32_f5dc_4042_97d6_b0f7837b8c76.slice: Consumed 7.365s CPU time, 125.3M memory peak, 136K read from disk, 13.3M written to disk. Jun 20 18:54:29.401683 containerd[1726]: time="2025-06-20T18:54:29.401195250Z" level=info msg="RemoveContainer for \"cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2\" returns successfully" Jun 20 18:54:29.402679 kubelet[3290]: I0620 18:54:29.402417 3290 scope.go:117] "RemoveContainer" containerID="a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d" Jun 20 18:54:29.404476 containerd[1726]: time="2025-06-20T18:54:29.404164888Z" level=info msg="RemoveContainer for \"a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d\"" Jun 20 18:54:29.422781 containerd[1726]: time="2025-06-20T18:54:29.422733229Z" level=info msg="RemoveContainer for \"a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d\" returns successfully" Jun 20 18:54:29.423287 kubelet[3290]: I0620 18:54:29.423203 3290 scope.go:117] "RemoveContainer" containerID="dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c" Jun 20 18:54:29.424571 containerd[1726]: time="2025-06-20T18:54:29.424531552Z" level=info msg="RemoveContainer for \"dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c\"" Jun 20 18:54:29.437276 containerd[1726]: time="2025-06-20T18:54:29.437234516Z" level=info msg="RemoveContainer for \"dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c\" returns successfully" Jun 20 18:54:29.437473 kubelet[3290]: I0620 18:54:29.437452 3290 scope.go:117] "RemoveContainer" containerID="f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da" Jun 20 18:54:29.438637 containerd[1726]: time="2025-06-20T18:54:29.438602834Z" level=info msg="RemoveContainer for \"f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da\"" Jun 20 18:54:29.449316 containerd[1726]: time="2025-06-20T18:54:29.449285672Z" level=info msg="RemoveContainer for \"f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da\" returns successfully" Jun 20 18:54:29.449611 kubelet[3290]: I0620 18:54:29.449464 3290 scope.go:117] "RemoveContainer" containerID="85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108" Jun 20 18:54:29.453155 containerd[1726]: time="2025-06-20T18:54:29.453128322Z" level=info msg="RemoveContainer for \"85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108\"" Jun 20 18:54:29.461058 containerd[1726]: time="2025-06-20T18:54:29.461030625Z" level=info msg="RemoveContainer for \"85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108\" returns successfully" Jun 20 18:54:29.461217 kubelet[3290]: I0620 18:54:29.461193 3290 scope.go:117] "RemoveContainer" containerID="cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2" Jun 20 18:54:29.461437 containerd[1726]: time="2025-06-20T18:54:29.461382529Z" level=error msg="ContainerStatus for \"cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2\": not found" Jun 20 18:54:29.461624 kubelet[3290]: E0620 18:54:29.461567 3290 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2\": not found" containerID="cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2" Jun 20 18:54:29.461624 kubelet[3290]: I0620 18:54:29.461608 3290 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2"} err="failed to get container status \"cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2\": rpc error: code = NotFound desc = an error occurred when try to find container \"cfb530a7543b86237028a22d5c44d01a477dab937e502ba9f9552066b8a9caf2\": not found" Jun 20 18:54:29.461763 kubelet[3290]: I0620 18:54:29.461640 3290 scope.go:117] "RemoveContainer" containerID="a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d" Jun 20 18:54:29.461872 containerd[1726]: time="2025-06-20T18:54:29.461836535Z" level=error msg="ContainerStatus for \"a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d\": not found" Jun 20 18:54:29.462401 kubelet[3290]: E0620 18:54:29.462121 3290 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d\": not found" containerID="a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d" Jun 20 18:54:29.462401 kubelet[3290]: I0620 18:54:29.462151 3290 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d"} err="failed to get container status \"a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8549ecbe3008feae4d7c1538182063f8d703719795279b2ec56c353a9849b4d\": not found" Jun 20 18:54:29.462401 kubelet[3290]: I0620 18:54:29.462174 3290 scope.go:117] "RemoveContainer" containerID="dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c" Jun 20 18:54:29.462564 containerd[1726]: time="2025-06-20T18:54:29.462339542Z" level=error msg="ContainerStatus for \"dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c\": not found" Jun 20 18:54:29.462682 kubelet[3290]: E0620 18:54:29.462654 3290 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c\": not found" containerID="dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c" Jun 20 18:54:29.462752 kubelet[3290]: I0620 18:54:29.462686 3290 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c"} err="failed to get container status \"dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"dddb044cb7a1d43e14bd3eee8bc1b3bf2580456e70f75be8b62750b2aab7bd5c\": not found" Jun 20 18:54:29.462752 kubelet[3290]: I0620 18:54:29.462708 3290 scope.go:117] "RemoveContainer" containerID="f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da" Jun 20 18:54:29.462914 containerd[1726]: time="2025-06-20T18:54:29.462877049Z" level=error msg="ContainerStatus for \"f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da\": not found" Jun 20 18:54:29.463104 kubelet[3290]: E0620 18:54:29.463023 3290 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da\": not found" containerID="f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da" Jun 20 18:54:29.463104 kubelet[3290]: I0620 18:54:29.463052 3290 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da"} err="failed to get container status \"f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0e382ab139b6a42b1f5f79aa5d6d804e3196e5b7100140350c4d9d46b2b24da\": not found" Jun 20 18:54:29.463104 kubelet[3290]: I0620 18:54:29.463073 3290 scope.go:117] "RemoveContainer" containerID="85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108" Jun 20 18:54:29.463322 containerd[1726]: time="2025-06-20T18:54:29.463233353Z" level=error msg="ContainerStatus for \"85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108\": not found" Jun 20 18:54:29.463407 kubelet[3290]: E0620 18:54:29.463352 3290 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108\": not found" containerID="85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108" Jun 20 18:54:29.463407 kubelet[3290]: I0620 18:54:29.463376 3290 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108"} err="failed to get container status \"85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108\": rpc error: code = NotFound desc = an error occurred when try to find container \"85beca8072371cc1c613cc861d67ae5a6021210034829442e005dfa099bfc108\": not found" Jun 20 18:54:29.623616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3-rootfs.mount: Deactivated successfully. Jun 20 18:54:29.623757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761-rootfs.mount: Deactivated successfully. Jun 20 18:54:29.623846 systemd[1]: var-lib-kubelet-pods-9c8a9c4e\x2dbd33\x2d4599\x2d9178\x2d85279d02aade-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4t6l9.mount: Deactivated successfully. 
Jun 20 18:54:29.623948 systemd[1]: var-lib-kubelet-pods-31dd5f32\x2df5dc\x2d4042\x2d97d6\x2db0f7837b8c76-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drfq5w.mount: Deactivated successfully.
Jun 20 18:54:29.624043 systemd[1]: var-lib-kubelet-pods-31dd5f32\x2df5dc\x2d4042\x2d97d6\x2db0f7837b8c76-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jun 20 18:54:29.624125 systemd[1]: var-lib-kubelet-pods-31dd5f32\x2df5dc\x2d4042\x2d97d6\x2db0f7837b8c76-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jun 20 18:54:29.958882 kubelet[3290]: I0620 18:54:29.958755 3290 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31dd5f32-f5dc-4042-97d6-b0f7837b8c76" path="/var/lib/kubelet/pods/31dd5f32-f5dc-4042-97d6-b0f7837b8c76/volumes"
Jun 20 18:54:29.959542 kubelet[3290]: I0620 18:54:29.959508 3290 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c8a9c4e-bd33-4599-9178-85279d02aade" path="/var/lib/kubelet/pods/9c8a9c4e-bd33-4599-9178-85279d02aade/volumes"
Jun 20 18:54:30.652566 sshd[4911]: Connection closed by 10.200.16.10 port 39360
Jun 20 18:54:30.653649 sshd-session[4909]: pam_unix(sshd:session): session closed for user core
Jun 20 18:54:30.658475 systemd[1]: sshd@22-10.200.8.40:22-10.200.16.10:39360.service: Deactivated successfully.
Jun 20 18:54:30.660495 systemd[1]: session-25.scope: Deactivated successfully.
Jun 20 18:54:30.661517 systemd-logind[1699]: Session 25 logged out. Waiting for processes to exit.
Jun 20 18:54:30.662672 systemd-logind[1699]: Removed session 25.
Jun 20 18:54:30.769259 systemd[1]: Started sshd@23-10.200.8.40:22-10.200.16.10:43922.service - OpenSSH per-connection server daemon (10.200.16.10:43922).
Jun 20 18:54:31.393120 sshd[5074]: Accepted publickey for core from 10.200.16.10 port 43922 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs
Jun 20 18:54:31.394772 sshd-session[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:54:31.399372 systemd-logind[1699]: New session 26 of user core.
Jun 20 18:54:31.406086 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 20 18:54:32.560448 kubelet[3290]: E0620 18:54:32.559717 3290 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31dd5f32-f5dc-4042-97d6-b0f7837b8c76" containerName="mount-cgroup"
Jun 20 18:54:32.560448 kubelet[3290]: E0620 18:54:32.559762 3290 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31dd5f32-f5dc-4042-97d6-b0f7837b8c76" containerName="apply-sysctl-overwrites"
Jun 20 18:54:32.560448 kubelet[3290]: E0620 18:54:32.559775 3290 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c8a9c4e-bd33-4599-9178-85279d02aade" containerName="cilium-operator"
Jun 20 18:54:32.560448 kubelet[3290]: E0620 18:54:32.559783 3290 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31dd5f32-f5dc-4042-97d6-b0f7837b8c76" containerName="clean-cilium-state"
Jun 20 18:54:32.560448 kubelet[3290]: E0620 18:54:32.559792 3290 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31dd5f32-f5dc-4042-97d6-b0f7837b8c76" containerName="cilium-agent"
Jun 20 18:54:32.560448 kubelet[3290]: E0620 18:54:32.559801 3290 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31dd5f32-f5dc-4042-97d6-b0f7837b8c76" containerName="mount-bpf-fs"
Jun 20 18:54:32.560448 kubelet[3290]: I0620 18:54:32.559832 3290 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c8a9c4e-bd33-4599-9178-85279d02aade" containerName="cilium-operator"
Jun 20 18:54:32.560448 kubelet[3290]: I0620 18:54:32.559843 3290 memory_manager.go:354] "RemoveStaleState removing state" podUID="31dd5f32-f5dc-4042-97d6-b0f7837b8c76" containerName="cilium-agent"
Jun 20 18:54:32.573302 systemd[1]: Created slice kubepods-burstable-pod5e4a43c5_3a6a_44f3_ad96_5c2d336be30f.slice - libcontainer container kubepods-burstable-pod5e4a43c5_3a6a_44f3_ad96_5c2d336be30f.slice.
Jun 20 18:54:32.592431 kubelet[3290]: I0620 18:54:32.591910 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e4a43c5-3a6a-44f3-ad96-5c2d336be30f-host-proc-sys-kernel\") pod \"cilium-zb4j9\" (UID: \"5e4a43c5-3a6a-44f3-ad96-5c2d336be30f\") " pod="kube-system/cilium-zb4j9"
Jun 20 18:54:32.592431 kubelet[3290]: I0620 18:54:32.591978 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e4a43c5-3a6a-44f3-ad96-5c2d336be30f-cilium-run\") pod \"cilium-zb4j9\" (UID: \"5e4a43c5-3a6a-44f3-ad96-5c2d336be30f\") " pod="kube-system/cilium-zb4j9"
Jun 20 18:54:32.592431 kubelet[3290]: I0620 18:54:32.592003 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e4a43c5-3a6a-44f3-ad96-5c2d336be30f-bpf-maps\") pod \"cilium-zb4j9\" (UID: \"5e4a43c5-3a6a-44f3-ad96-5c2d336be30f\") " pod="kube-system/cilium-zb4j9"
Jun 20 18:54:32.592431 kubelet[3290]: I0620 18:54:32.592027 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e4a43c5-3a6a-44f3-ad96-5c2d336be30f-etc-cni-netd\") pod \"cilium-zb4j9\" (UID: \"5e4a43c5-3a6a-44f3-ad96-5c2d336be30f\") " pod="kube-system/cilium-zb4j9"
Jun 20 18:54:32.592431 kubelet[3290]: I0620 18:54:32.592051 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e4a43c5-3a6a-44f3-ad96-5c2d336be30f-clustermesh-secrets\") pod \"cilium-zb4j9\" (UID: \"5e4a43c5-3a6a-44f3-ad96-5c2d336be30f\") " pod="kube-system/cilium-zb4j9"
Jun 20 18:54:32.592431 kubelet[3290]: I0620 18:54:32.592074 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e4a43c5-3a6a-44f3-ad96-5c2d336be30f-xtables-lock\") pod \"cilium-zb4j9\" (UID: \"5e4a43c5-3a6a-44f3-ad96-5c2d336be30f\") " pod="kube-system/cilium-zb4j9"
Jun 20 18:54:32.592794 kubelet[3290]: I0620 18:54:32.592096 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e4a43c5-3a6a-44f3-ad96-5c2d336be30f-hubble-tls\") pod \"cilium-zb4j9\" (UID: \"5e4a43c5-3a6a-44f3-ad96-5c2d336be30f\") " pod="kube-system/cilium-zb4j9"
Jun 20 18:54:32.592794 kubelet[3290]: I0620 18:54:32.592120 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e4a43c5-3a6a-44f3-ad96-5c2d336be30f-cni-path\") pod \"cilium-zb4j9\" (UID: \"5e4a43c5-3a6a-44f3-ad96-5c2d336be30f\") " pod="kube-system/cilium-zb4j9"
Jun 20 18:54:32.592794 kubelet[3290]: I0620 18:54:32.592142 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e4a43c5-3a6a-44f3-ad96-5c2d336be30f-cilium-config-path\") pod \"cilium-zb4j9\" (UID: \"5e4a43c5-3a6a-44f3-ad96-5c2d336be30f\") " pod="kube-system/cilium-zb4j9"
Jun 20 18:54:32.592794 kubelet[3290]: I0620 18:54:32.592176 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7nhc\" (UniqueName: \"kubernetes.io/projected/5e4a43c5-3a6a-44f3-ad96-5c2d336be30f-kube-api-access-f7nhc\") pod \"cilium-zb4j9\" (UID: \"5e4a43c5-3a6a-44f3-ad96-5c2d336be30f\") " pod="kube-system/cilium-zb4j9"
Jun 20 18:54:32.592794 kubelet[3290]: I0620 18:54:32.592203 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5e4a43c5-3a6a-44f3-ad96-5c2d336be30f-cilium-ipsec-secrets\") pod \"cilium-zb4j9\" (UID: \"5e4a43c5-3a6a-44f3-ad96-5c2d336be30f\") " pod="kube-system/cilium-zb4j9"
Jun 20 18:54:32.593022 kubelet[3290]: I0620 18:54:32.592229 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e4a43c5-3a6a-44f3-ad96-5c2d336be30f-host-proc-sys-net\") pod \"cilium-zb4j9\" (UID: \"5e4a43c5-3a6a-44f3-ad96-5c2d336be30f\") " pod="kube-system/cilium-zb4j9"
Jun 20 18:54:32.593022 kubelet[3290]: I0620 18:54:32.592255 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e4a43c5-3a6a-44f3-ad96-5c2d336be30f-hostproc\") pod \"cilium-zb4j9\" (UID: \"5e4a43c5-3a6a-44f3-ad96-5c2d336be30f\") " pod="kube-system/cilium-zb4j9"
Jun 20 18:54:32.593022 kubelet[3290]: I0620 18:54:32.592278 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e4a43c5-3a6a-44f3-ad96-5c2d336be30f-cilium-cgroup\") pod \"cilium-zb4j9\" (UID: \"5e4a43c5-3a6a-44f3-ad96-5c2d336be30f\") " pod="kube-system/cilium-zb4j9"
Jun 20 18:54:32.593022 kubelet[3290]: I0620 18:54:32.592299 3290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e4a43c5-3a6a-44f3-ad96-5c2d336be30f-lib-modules\") pod \"cilium-zb4j9\" (UID: \"5e4a43c5-3a6a-44f3-ad96-5c2d336be30f\") " pod="kube-system/cilium-zb4j9"
Jun 20 18:54:32.629986 sshd[5076]: Connection closed by 10.200.16.10 port 43922
Jun 20 18:54:32.632090 sshd-session[5074]: pam_unix(sshd:session): session closed for user core
Jun 20 18:54:32.641792 systemd[1]: sshd@23-10.200.8.40:22-10.200.16.10:43922.service: Deactivated successfully.
Jun 20 18:54:32.648873 systemd[1]: session-26.scope: Deactivated successfully.
Jun 20 18:54:32.652800 systemd-logind[1699]: Session 26 logged out. Waiting for processes to exit.
Jun 20 18:54:32.655318 systemd-logind[1699]: Removed session 26.
Jun 20 18:54:32.748533 systemd[1]: Started sshd@24-10.200.8.40:22-10.200.16.10:43924.service - OpenSSH per-connection server daemon (10.200.16.10:43924).
Jun 20 18:54:32.881235 containerd[1726]: time="2025-06-20T18:54:32.881092019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zb4j9,Uid:5e4a43c5-3a6a-44f3-ad96-5c2d336be30f,Namespace:kube-system,Attempt:0,}"
Jun 20 18:54:32.941943 containerd[1726]: time="2025-06-20T18:54:32.941651893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 18:54:32.941943 containerd[1726]: time="2025-06-20T18:54:32.941726093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 18:54:32.941943 containerd[1726]: time="2025-06-20T18:54:32.941748194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 18:54:32.944212 containerd[1726]: time="2025-06-20T18:54:32.942346298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 18:54:32.972067 systemd[1]: Started cri-containerd-4e6b499f07c9a6617f2b97376f53d784e85a198f205e8bea7f1389e89e8ff1bb.scope - libcontainer container 4e6b499f07c9a6617f2b97376f53d784e85a198f205e8bea7f1389e89e8ff1bb.
Jun 20 18:54:32.994234 containerd[1726]: time="2025-06-20T18:54:32.994193004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zb4j9,Uid:5e4a43c5-3a6a-44f3-ad96-5c2d336be30f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e6b499f07c9a6617f2b97376f53d784e85a198f205e8bea7f1389e89e8ff1bb\""
Jun 20 18:54:32.997779 containerd[1726]: time="2025-06-20T18:54:32.997615530Z" level=info msg="CreateContainer within sandbox \"4e6b499f07c9a6617f2b97376f53d784e85a198f205e8bea7f1389e89e8ff1bb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 20 18:54:33.056774 containerd[1726]: time="2025-06-20T18:54:33.056713993Z" level=info msg="CreateContainer within sandbox \"4e6b499f07c9a6617f2b97376f53d784e85a198f205e8bea7f1389e89e8ff1bb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ceae33e018a695b8af0a53e4b750e958d11d32cd6d1d1ff9d12c668551f9f58a\""
Jun 20 18:54:33.058210 containerd[1726]: time="2025-06-20T18:54:33.057353698Z" level=info msg="StartContainer for \"ceae33e018a695b8af0a53e4b750e958d11d32cd6d1d1ff9d12c668551f9f58a\""
Jun 20 18:54:33.084947 kubelet[3290]: E0620 18:54:33.084886 3290 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 20 18:54:33.085127 systemd[1]: Started cri-containerd-ceae33e018a695b8af0a53e4b750e958d11d32cd6d1d1ff9d12c668551f9f58a.scope - libcontainer container ceae33e018a695b8af0a53e4b750e958d11d32cd6d1d1ff9d12c668551f9f58a.
Jun 20 18:54:33.117945 containerd[1726]: time="2025-06-20T18:54:33.116199258Z" level=info msg="StartContainer for \"ceae33e018a695b8af0a53e4b750e958d11d32cd6d1d1ff9d12c668551f9f58a\" returns successfully"
Jun 20 18:54:33.123020 systemd[1]: cri-containerd-ceae33e018a695b8af0a53e4b750e958d11d32cd6d1d1ff9d12c668551f9f58a.scope: Deactivated successfully.
Jun 20 18:54:33.194291 containerd[1726]: time="2025-06-20T18:54:33.193962266Z" level=info msg="shim disconnected" id=ceae33e018a695b8af0a53e4b750e958d11d32cd6d1d1ff9d12c668551f9f58a namespace=k8s.io
Jun 20 18:54:33.194291 containerd[1726]: time="2025-06-20T18:54:33.194070767Z" level=warning msg="cleaning up after shim disconnected" id=ceae33e018a695b8af0a53e4b750e958d11d32cd6d1d1ff9d12c668551f9f58a namespace=k8s.io
Jun 20 18:54:33.194291 containerd[1726]: time="2025-06-20T18:54:33.194089067Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:54:33.373955 sshd[5092]: Accepted publickey for core from 10.200.16.10 port 43924 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs
Jun 20 18:54:33.375436 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:54:33.380046 systemd-logind[1699]: New session 27 of user core.
Jun 20 18:54:33.389146 systemd[1]: Started session-27.scope - Session 27 of User core.
Jun 20 18:54:33.398690 containerd[1726]: time="2025-06-20T18:54:33.398548366Z" level=info msg="CreateContainer within sandbox \"4e6b499f07c9a6617f2b97376f53d784e85a198f205e8bea7f1389e89e8ff1bb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 20 18:54:33.444188 containerd[1726]: time="2025-06-20T18:54:33.444135222Z" level=info msg="CreateContainer within sandbox \"4e6b499f07c9a6617f2b97376f53d784e85a198f205e8bea7f1389e89e8ff1bb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a10daaeabf32a6dacfe21ba7e241bbae79833159efda346bbbf81bcfa3c25a7a\""
Jun 20 18:54:33.444934 containerd[1726]: time="2025-06-20T18:54:33.444737127Z" level=info msg="StartContainer for \"a10daaeabf32a6dacfe21ba7e241bbae79833159efda346bbbf81bcfa3c25a7a\""
Jun 20 18:54:33.477099 systemd[1]: Started cri-containerd-a10daaeabf32a6dacfe21ba7e241bbae79833159efda346bbbf81bcfa3c25a7a.scope - libcontainer container a10daaeabf32a6dacfe21ba7e241bbae79833159efda346bbbf81bcfa3c25a7a.
Jun 20 18:54:33.508749 systemd[1]: cri-containerd-a10daaeabf32a6dacfe21ba7e241bbae79833159efda346bbbf81bcfa3c25a7a.scope: Deactivated successfully.
Jun 20 18:54:33.509161 containerd[1726]: time="2025-06-20T18:54:33.509095730Z" level=info msg="StartContainer for \"a10daaeabf32a6dacfe21ba7e241bbae79833159efda346bbbf81bcfa3c25a7a\" returns successfully"
Jun 20 18:54:33.546982 containerd[1726]: time="2025-06-20T18:54:33.546895326Z" level=info msg="shim disconnected" id=a10daaeabf32a6dacfe21ba7e241bbae79833159efda346bbbf81bcfa3c25a7a namespace=k8s.io
Jun 20 18:54:33.546982 containerd[1726]: time="2025-06-20T18:54:33.546977427Z" level=warning msg="cleaning up after shim disconnected" id=a10daaeabf32a6dacfe21ba7e241bbae79833159efda346bbbf81bcfa3c25a7a namespace=k8s.io
Jun 20 18:54:33.546982 containerd[1726]: time="2025-06-20T18:54:33.546989227Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:54:33.814858 sshd[5194]: Connection closed by 10.200.16.10 port 43924
Jun 20 18:54:33.815766 sshd-session[5092]: pam_unix(sshd:session): session closed for user core
Jun 20 18:54:33.819441 systemd[1]: sshd@24-10.200.8.40:22-10.200.16.10:43924.service: Deactivated successfully.
Jun 20 18:54:33.822154 systemd[1]: session-27.scope: Deactivated successfully.
Jun 20 18:54:33.824031 systemd-logind[1699]: Session 27 logged out. Waiting for processes to exit.
Jun 20 18:54:33.825115 systemd-logind[1699]: Removed session 27.
Jun 20 18:54:33.934286 systemd[1]: Started sshd@25-10.200.8.40:22-10.200.16.10:43940.service - OpenSSH per-connection server daemon (10.200.16.10:43940).
Jun 20 18:54:34.400350 containerd[1726]: time="2025-06-20T18:54:34.400298291Z" level=info msg="CreateContainer within sandbox \"4e6b499f07c9a6617f2b97376f53d784e85a198f205e8bea7f1389e89e8ff1bb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 20 18:54:34.443132 containerd[1726]: time="2025-06-20T18:54:34.443077896Z" level=info msg="CreateContainer within sandbox \"4e6b499f07c9a6617f2b97376f53d784e85a198f205e8bea7f1389e89e8ff1bb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1a6c959e65c297186b166ab544eb8ae712b5b08dc033d04d9306efbf03d17c3a\""
Jun 20 18:54:34.444450 containerd[1726]: time="2025-06-20T18:54:34.443690203Z" level=info msg="StartContainer for \"1a6c959e65c297186b166ab544eb8ae712b5b08dc033d04d9306efbf03d17c3a\""
Jun 20 18:54:34.487096 systemd[1]: Started cri-containerd-1a6c959e65c297186b166ab544eb8ae712b5b08dc033d04d9306efbf03d17c3a.scope - libcontainer container 1a6c959e65c297186b166ab544eb8ae712b5b08dc033d04d9306efbf03d17c3a.
Jun 20 18:54:34.524442 systemd[1]: cri-containerd-1a6c959e65c297186b166ab544eb8ae712b5b08dc033d04d9306efbf03d17c3a.scope: Deactivated successfully.
Jun 20 18:54:34.526717 containerd[1726]: time="2025-06-20T18:54:34.526663383Z" level=info msg="StartContainer for \"1a6c959e65c297186b166ab544eb8ae712b5b08dc033d04d9306efbf03d17c3a\" returns successfully"
Jun 20 18:54:34.560960 sshd[5265]: Accepted publickey for core from 10.200.16.10 port 43940 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs
Jun 20 18:54:34.562468 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:54:34.569584 systemd-logind[1699]: New session 28 of user core.
Jun 20 18:54:34.571244 containerd[1726]: time="2025-06-20T18:54:34.570798704Z" level=info msg="shim disconnected" id=1a6c959e65c297186b166ab544eb8ae712b5b08dc033d04d9306efbf03d17c3a namespace=k8s.io
Jun 20 18:54:34.571244 containerd[1726]: time="2025-06-20T18:54:34.570881105Z" level=warning msg="cleaning up after shim disconnected" id=1a6c959e65c297186b166ab544eb8ae712b5b08dc033d04d9306efbf03d17c3a namespace=k8s.io
Jun 20 18:54:34.571244 containerd[1726]: time="2025-06-20T18:54:34.570895805Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:54:34.572105 systemd[1]: Started session-28.scope - Session 28 of User core.
Jun 20 18:54:34.703631 systemd[1]: run-containerd-runc-k8s.io-1a6c959e65c297186b166ab544eb8ae712b5b08dc033d04d9306efbf03d17c3a-runc.MnHYDO.mount: Deactivated successfully.
Jun 20 18:54:34.703792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a6c959e65c297186b166ab544eb8ae712b5b08dc033d04d9306efbf03d17c3a-rootfs.mount: Deactivated successfully.
Jun 20 18:54:35.407051 containerd[1726]: time="2025-06-20T18:54:35.406982178Z" level=info msg="CreateContainer within sandbox \"4e6b499f07c9a6617f2b97376f53d784e85a198f205e8bea7f1389e89e8ff1bb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 18:54:35.461396 containerd[1726]: time="2025-06-20T18:54:35.461350620Z" level=info msg="CreateContainer within sandbox \"4e6b499f07c9a6617f2b97376f53d784e85a198f205e8bea7f1389e89e8ff1bb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"71a6bdf0438e8f4974dced7f8faaaf39c4db5f7acddec16e3d6b6de3effb02df\""
Jun 20 18:54:35.463376 containerd[1726]: time="2025-06-20T18:54:35.462338932Z" level=info msg="StartContainer for \"71a6bdf0438e8f4974dced7f8faaaf39c4db5f7acddec16e3d6b6de3effb02df\""
Jun 20 18:54:35.498082 systemd[1]: Started cri-containerd-71a6bdf0438e8f4974dced7f8faaaf39c4db5f7acddec16e3d6b6de3effb02df.scope - libcontainer container 71a6bdf0438e8f4974dced7f8faaaf39c4db5f7acddec16e3d6b6de3effb02df.
Jun 20 18:54:35.522212 systemd[1]: cri-containerd-71a6bdf0438e8f4974dced7f8faaaf39c4db5f7acddec16e3d6b6de3effb02df.scope: Deactivated successfully.
Jun 20 18:54:35.529039 containerd[1726]: time="2025-06-20T18:54:35.528996519Z" level=info msg="StartContainer for \"71a6bdf0438e8f4974dced7f8faaaf39c4db5f7acddec16e3d6b6de3effb02df\" returns successfully"
Jun 20 18:54:35.567065 containerd[1726]: time="2025-06-20T18:54:35.566965267Z" level=info msg="shim disconnected" id=71a6bdf0438e8f4974dced7f8faaaf39c4db5f7acddec16e3d6b6de3effb02df namespace=k8s.io
Jun 20 18:54:35.567065 containerd[1726]: time="2025-06-20T18:54:35.567041068Z" level=warning msg="cleaning up after shim disconnected" id=71a6bdf0438e8f4974dced7f8faaaf39c4db5f7acddec16e3d6b6de3effb02df namespace=k8s.io
Jun 20 18:54:35.567065 containerd[1726]: time="2025-06-20T18:54:35.567054368Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:54:35.703000 systemd[1]: run-containerd-runc-k8s.io-71a6bdf0438e8f4974dced7f8faaaf39c4db5f7acddec16e3d6b6de3effb02df-runc.8Hs3ze.mount: Deactivated successfully.
Jun 20 18:54:35.703139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71a6bdf0438e8f4974dced7f8faaaf39c4db5f7acddec16e3d6b6de3effb02df-rootfs.mount: Deactivated successfully.
Jun 20 18:54:36.410173 containerd[1726]: time="2025-06-20T18:54:36.410121223Z" level=info msg="CreateContainer within sandbox \"4e6b499f07c9a6617f2b97376f53d784e85a198f205e8bea7f1389e89e8ff1bb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 18:54:36.477847 containerd[1726]: time="2025-06-20T18:54:36.477789822Z" level=info msg="CreateContainer within sandbox \"4e6b499f07c9a6617f2b97376f53d784e85a198f205e8bea7f1389e89e8ff1bb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c600bca35ff2e1aefce96fbb521b3478115535670bafe68cd4a8ee635128451b\""
Jun 20 18:54:36.479816 containerd[1726]: time="2025-06-20T18:54:36.479774346Z" level=info msg="StartContainer for \"c600bca35ff2e1aefce96fbb521b3478115535670bafe68cd4a8ee635128451b\""
Jun 20 18:54:36.541103 systemd[1]: Started cri-containerd-c600bca35ff2e1aefce96fbb521b3478115535670bafe68cd4a8ee635128451b.scope - libcontainer container c600bca35ff2e1aefce96fbb521b3478115535670bafe68cd4a8ee635128451b.
Jun 20 18:54:36.582808 containerd[1726]: time="2025-06-20T18:54:36.582758362Z" level=info msg="StartContainer for \"c600bca35ff2e1aefce96fbb521b3478115535670bafe68cd4a8ee635128451b\" returns successfully"
Jun 20 18:54:37.110972 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jun 20 18:54:39.276486 systemd[1]: run-containerd-runc-k8s.io-c600bca35ff2e1aefce96fbb521b3478115535670bafe68cd4a8ee635128451b-runc.l3jTR9.mount: Deactivated successfully.
Jun 20 18:54:39.984128 systemd-networkd[1330]: lxc_health: Link UP
Jun 20 18:54:39.995656 systemd-networkd[1330]: lxc_health: Gained carrier
Jun 20 18:54:40.914867 kubelet[3290]: I0620 18:54:40.914795 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zb4j9" podStartSLOduration=8.914768858 podStartE2EDuration="8.914768858s" podCreationTimestamp="2025-06-20 18:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:54:37.429616061 +0000 UTC m=+170.178575106" watchObservedRunningTime="2025-06-20 18:54:40.914768858 +0000 UTC m=+173.663727903"
Jun 20 18:54:41.257079 systemd-networkd[1330]: lxc_health: Gained IPv6LL
Jun 20 18:54:41.438396 systemd[1]: run-containerd-runc-k8s.io-c600bca35ff2e1aefce96fbb521b3478115535670bafe68cd4a8ee635128451b-runc.ggqnUn.mount: Deactivated successfully.
Jun 20 18:54:45.966786 sshd[5317]: Connection closed by 10.200.16.10 port 43940
Jun 20 18:54:45.967703 sshd-session[5265]: pam_unix(sshd:session): session closed for user core
Jun 20 18:54:45.972072 systemd[1]: sshd@25-10.200.8.40:22-10.200.16.10:43940.service: Deactivated successfully.
Jun 20 18:54:45.974250 systemd[1]: session-28.scope: Deactivated successfully.
Jun 20 18:54:45.975287 systemd-logind[1699]: Session 28 logged out. Waiting for processes to exit.
Jun 20 18:54:45.976358 systemd-logind[1699]: Removed session 28.
Jun 20 18:54:47.998714 containerd[1726]: time="2025-06-20T18:54:47.998632779Z" level=info msg="StopPodSandbox for \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\""
Jun 20 18:54:47.999458 containerd[1726]: time="2025-06-20T18:54:47.998745480Z" level=info msg="TearDown network for sandbox \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" successfully"
Jun 20 18:54:47.999458 containerd[1726]: time="2025-06-20T18:54:47.998760880Z" level=info msg="StopPodSandbox for \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" returns successfully"
Jun 20 18:54:47.999458 containerd[1726]: time="2025-06-20T18:54:47.999224584Z" level=info msg="RemovePodSandbox for \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\""
Jun 20 18:54:47.999458 containerd[1726]: time="2025-06-20T18:54:47.999256985Z" level=info msg="Forcibly stopping sandbox \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\""
Jun 20 18:54:47.999458 containerd[1726]: time="2025-06-20T18:54:47.999321085Z" level=info msg="TearDown network for sandbox \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" successfully"
Jun 20 18:54:48.017537 containerd[1726]: time="2025-06-20T18:54:48.017465164Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jun 20 18:54:48.017885 containerd[1726]: time="2025-06-20T18:54:48.017553465Z" level=info msg="RemovePodSandbox \"c759c11e18905e3c94514815448408adaaee413d20d0c8f400ebd18202cfb761\" returns successfully"
Jun 20 18:54:48.018276 containerd[1726]: time="2025-06-20T18:54:48.018244171Z" level=info msg="StopPodSandbox for \"2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3\""
Jun 20 18:54:48.018413 containerd[1726]: time="2025-06-20T18:54:48.018345072Z" level=info msg="TearDown network for sandbox \"2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3\" successfully"
Jun 20 18:54:48.018413 containerd[1726]: time="2025-06-20T18:54:48.018367073Z" level=info msg="StopPodSandbox for \"2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3\" returns successfully"
Jun 20 18:54:48.018764 containerd[1726]: time="2025-06-20T18:54:48.018714376Z" level=info msg="RemovePodSandbox for \"2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3\""
Jun 20 18:54:48.018764 containerd[1726]: time="2025-06-20T18:54:48.018743976Z" level=info msg="Forcibly stopping sandbox \"2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3\""
Jun 20 18:54:48.018865 containerd[1726]: time="2025-06-20T18:54:48.018810077Z" level=info msg="TearDown network for sandbox \"2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3\" successfully"
Jun 20 18:54:48.034966 containerd[1726]: time="2025-06-20T18:54:48.034867335Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jun 20 18:54:48.034966 containerd[1726]: time="2025-06-20T18:54:48.034961736Z" level=info msg="RemovePodSandbox \"2e8e36465781ef19c59fa03125787df6f38eb70fe6ec702f505965432db4cbf3\" returns successfully"