Sep 12 10:14:59.124841 kernel: Linux version 6.6.105-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 08:42:12 -00 2025
Sep 12 10:14:59.124871 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2
Sep 12 10:14:59.124884 kernel: BIOS-provided physical RAM map:
Sep 12 10:14:59.124890 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 10:14:59.126948 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 12 10:14:59.126965 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Sep 12 10:14:59.126979 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc4fff] reserved
Sep 12 10:14:59.126993 kernel: BIOS-e820: [mem 0x000000003ffc5000-0x000000003ffd0fff] usable
Sep 12 10:14:59.127018 kernel: BIOS-e820: [mem 0x000000003ffd1000-0x000000003fffafff] ACPI data
Sep 12 10:14:59.127031 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 12 10:14:59.127043 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 12 10:14:59.127054 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 12 10:14:59.127070 kernel: printk: bootconsole [earlyser0] enabled
Sep 12 10:14:59.127083 kernel: NX (Execute Disable) protection: active
Sep 12 10:14:59.127104 kernel: APIC: Static calls initialized
Sep 12 10:14:59.127118 kernel: efi: EFI v2.7 by Microsoft
Sep 12 10:14:59.127133 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f339a98 RNG=0x3ffd2018
Sep 12 10:14:59.127149 kernel: random: crng init done
Sep 12 10:14:59.127163 kernel: secureboot: Secure boot disabled
Sep 12 10:14:59.127176 kernel: SMBIOS 3.1.0 present.
Sep 12 10:14:59.127192 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 12 10:14:59.127206 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 12 10:14:59.127219 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Sep 12 10:14:59.127236 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0
Sep 12 10:14:59.127255 kernel: Hyper-V: Nested features: 0x1e0101
Sep 12 10:14:59.127270 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 12 10:14:59.127287 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 12 10:14:59.127301 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 12 10:14:59.127316 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 12 10:14:59.127334 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Sep 12 10:14:59.127348 kernel: tsc: Detected 2593.908 MHz processor
Sep 12 10:14:59.127360 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 10:14:59.127372 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 10:14:59.127383 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Sep 12 10:14:59.127397 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 10:14:59.127410 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 10:14:59.127421 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Sep 12 10:14:59.127433 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Sep 12 10:14:59.127445 kernel: Using GB pages for direct mapping
Sep 12 10:14:59.127457 kernel: ACPI: Early table checksum verification disabled
Sep 12 10:14:59.127477 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 12 10:14:59.127495 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 10:14:59.127509 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 10:14:59.127524 kernel: ACPI: DSDT 0x000000003FFD6000 01E11C (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 12 10:14:59.127538 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 12 10:14:59.127552 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 10:14:59.127566 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 10:14:59.127584 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 10:14:59.127598 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 10:14:59.127612 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 10:14:59.127627 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 10:14:59.127642 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 12 10:14:59.127657 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff411b]
Sep 12 10:14:59.127671 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 12 10:14:59.127685 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 12 10:14:59.127700 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 12 10:14:59.127718 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 12 10:14:59.127733 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Sep 12 10:14:59.127748 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Sep 12 10:14:59.127763 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 12 10:14:59.127776 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 10:14:59.127791 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 10:14:59.127804 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Sep 12 10:14:59.127817 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Sep 12 10:14:59.127830 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Sep 12 10:14:59.127847 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Sep 12 10:14:59.127860 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Sep 12 10:14:59.127873 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Sep 12 10:14:59.127886 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Sep 12 10:14:59.127939 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Sep 12 10:14:59.127952 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Sep 12 10:14:59.127964 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Sep 12 10:14:59.127977 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Sep 12 10:14:59.127989 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Sep 12 10:14:59.128006 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Sep 12 10:14:59.128018 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Sep 12 10:14:59.128031 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Sep 12 10:14:59.128045 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Sep 12 10:14:59.128060 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Sep 12 10:14:59.128075 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Sep 12 10:14:59.128089 kernel: Zone ranges:
Sep 12 10:14:59.128103 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 10:14:59.128119 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 12 10:14:59.128133 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 12 10:14:59.128146 kernel: Movable zone start for each node
Sep 12 10:14:59.128160 kernel: Early memory node ranges
Sep 12 10:14:59.128174 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 10:14:59.128186 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Sep 12 10:14:59.128199 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd0fff]
Sep 12 10:14:59.128211 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 12 10:14:59.128224 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 12 10:14:59.128237 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 12 10:14:59.128252 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 10:14:59.128265 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 10:14:59.128278 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges
Sep 12 10:14:59.128291 kernel: On node 0, zone DMA32: 46 pages in unavailable ranges
Sep 12 10:14:59.128304 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 12 10:14:59.128316 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Sep 12 10:14:59.128330 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Sep 12 10:14:59.128343 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 10:14:59.128356 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 10:14:59.128372 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 12 10:14:59.128385 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 12 10:14:59.128397 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 12 10:14:59.128410 kernel: Booting paravirtualized kernel on Hyper-V
Sep 12 10:14:59.128423 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 10:14:59.128437 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 10:14:59.128450 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 12 10:14:59.128463 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 12 10:14:59.128479 kernel: pcpu-alloc: [0] 0 1
Sep 12 10:14:59.128492 kernel: Hyper-V: PV spinlocks enabled
Sep 12 10:14:59.128505 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 10:14:59.128521 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2
Sep 12 10:14:59.128535 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 10:14:59.128548 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 12 10:14:59.128561 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 10:14:59.128575 kernel: Fallback order for Node 0: 0
Sep 12 10:14:59.128591 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062374
Sep 12 10:14:59.128614 kernel: Policy zone: Normal
Sep 12 10:14:59.128629 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 10:14:59.128646 kernel: software IO TLB: area num 2.
Sep 12 10:14:59.128660 kernel: Memory: 8072560K/8387508K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 314692K reserved, 0K cma-reserved)
Sep 12 10:14:59.128675 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 10:14:59.128689 kernel: ftrace: allocating 37946 entries in 149 pages
Sep 12 10:14:59.128703 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 10:14:59.128717 kernel: Dynamic Preempt: voluntary
Sep 12 10:14:59.128731 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 10:14:59.128746 kernel: rcu: RCU event tracing is enabled.
Sep 12 10:14:59.128764 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 10:14:59.128778 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 10:14:59.128792 kernel: Rude variant of Tasks RCU enabled.
Sep 12 10:14:59.128806 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 10:14:59.128821 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 10:14:59.128835 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 10:14:59.128852 kernel: Using NULL legacy PIC
Sep 12 10:14:59.128867 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 12 10:14:59.128881 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 10:14:59.130921 kernel: Console: colour dummy device 80x25
Sep 12 10:14:59.130937 kernel: printk: console [tty1] enabled
Sep 12 10:14:59.130946 kernel: printk: console [ttyS0] enabled
Sep 12 10:14:59.130955 kernel: printk: bootconsole [earlyser0] disabled
Sep 12 10:14:59.130963 kernel: ACPI: Core revision 20230628
Sep 12 10:14:59.130975 kernel: Failed to register legacy timer interrupt
Sep 12 10:14:59.130988 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 10:14:59.130996 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 12 10:14:59.131006 kernel: Hyper-V: Using IPI hypercalls
Sep 12 10:14:59.131016 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Sep 12 10:14:59.131024 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Sep 12 10:14:59.131033 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Sep 12 10:14:59.131044 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Sep 12 10:14:59.131053 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Sep 12 10:14:59.131061 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Sep 12 10:14:59.131072 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908)
Sep 12 10:14:59.131081 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 12 10:14:59.131089 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 12 10:14:59.131097 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 10:14:59.131105 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 10:14:59.131113 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 10:14:59.131121 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 12 10:14:59.131129 kernel: RETBleed: Vulnerable
Sep 12 10:14:59.131137 kernel: Speculative Store Bypass: Vulnerable
Sep 12 10:14:59.131145 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 10:14:59.131156 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 10:14:59.131163 kernel: active return thunk: its_return_thunk
Sep 12 10:14:59.131171 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 10:14:59.131179 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 10:14:59.131187 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 10:14:59.131195 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 10:14:59.131203 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 12 10:14:59.131211 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 12 10:14:59.131219 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 12 10:14:59.131227 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 10:14:59.131238 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 12 10:14:59.131249 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 12 10:14:59.131260 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 12 10:14:59.131269 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Sep 12 10:14:59.131277 kernel: Freeing SMP alternatives memory: 32K
Sep 12 10:14:59.131285 kernel: pid_max: default: 32768 minimum: 301
Sep 12 10:14:59.131293 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 10:14:59.131301 kernel: landlock: Up and running.
Sep 12 10:14:59.131312 kernel: SELinux: Initializing.
Sep 12 10:14:59.131321 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 10:14:59.131329 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 10:14:59.131340 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 12 10:14:59.131349 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 10:14:59.131361 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 10:14:59.131371 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 10:14:59.131380 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 12 10:14:59.131391 kernel: signal: max sigframe size: 3632
Sep 12 10:14:59.131400 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 10:14:59.131409 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 10:14:59.131420 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 10:14:59.131429 kernel: smp: Bringing up secondary CPUs ...
Sep 12 10:14:59.131438 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 10:14:59.131450 kernel: .... node #0, CPUs: #1
Sep 12 10:14:59.131459 kernel: Transient Scheduler Attacks: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Sep 12 10:14:59.131472 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 10:14:59.131481 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 10:14:59.131490 kernel: smpboot: Max logical packages: 1
Sep 12 10:14:59.131501 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS)
Sep 12 10:14:59.131509 kernel: devtmpfs: initialized
Sep 12 10:14:59.131520 kernel: x86/mm: Memory block size: 128MB
Sep 12 10:14:59.131531 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 12 10:14:59.131541 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 10:14:59.131551 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 10:14:59.131560 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 10:14:59.131571 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 10:14:59.131580 kernel: audit: initializing netlink subsys (disabled)
Sep 12 10:14:59.131588 kernel: audit: type=2000 audit(1757672097.030:1): state=initialized audit_enabled=0 res=1
Sep 12 10:14:59.131599 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 10:14:59.131612 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 10:14:59.131624 kernel: cpuidle: using governor menu
Sep 12 10:14:59.131635 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 10:14:59.131644 kernel: dca service started, version 1.12.1
Sep 12 10:14:59.131654 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Sep 12 10:14:59.131663 kernel: e820: reserve RAM buffer [mem 0x3ffd1000-0x3fffffff]
Sep 12 10:14:59.131674 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 10:14:59.131682 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 10:14:59.131691 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 10:14:59.131706 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 10:14:59.131715 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 10:14:59.131723 kernel: ACPI: Added _OSI(Module Device)
Sep 12 10:14:59.131734 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 10:14:59.131742 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 10:14:59.131752 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 10:14:59.131762 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 10:14:59.131770 kernel: ACPI: Interpreter enabled
Sep 12 10:14:59.131781 kernel: ACPI: PM: (supports S0 S5)
Sep 12 10:14:59.131789 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 10:14:59.131801 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 10:14:59.131812 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 12 10:14:59.131820 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 12 10:14:59.131830 kernel: iommu: Default domain type: Translated
Sep 12 10:14:59.131840 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 10:14:59.131848 kernel: efivars: Registered efivars operations
Sep 12 10:14:59.131860 kernel: PCI: Using ACPI for IRQ routing
Sep 12 10:14:59.131868 kernel: PCI: System does not support PCI
Sep 12 10:14:59.131876 kernel: vgaarb: loaded
Sep 12 10:14:59.131889 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Sep 12 10:14:59.131907 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 10:14:59.131917 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 10:14:59.131925 kernel: pnp: PnP ACPI init
Sep 12 10:14:59.131936 kernel: pnp: PnP ACPI: found 3 devices
Sep 12 10:14:59.131945 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 10:14:59.131954 kernel: NET: Registered PF_INET protocol family
Sep 12 10:14:59.131966 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 10:14:59.131974 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 12 10:14:59.131987 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 10:14:59.131998 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 10:14:59.132006 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 12 10:14:59.132018 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 12 10:14:59.132026 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 12 10:14:59.132034 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 12 10:14:59.132046 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 10:14:59.132054 kernel: NET: Registered PF_XDP protocol family
Sep 12 10:14:59.132064 kernel: PCI: CLS 0 bytes, default 64
Sep 12 10:14:59.132076 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 12 10:14:59.132085 kernel: software IO TLB: mapped [mem 0x000000003b339000-0x000000003f339000] (64MB)
Sep 12 10:14:59.132097 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 10:14:59.132106 kernel: Initialise system trusted keyrings
Sep 12 10:14:59.132117 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 12 10:14:59.132126 kernel: Key type asymmetric registered
Sep 12 10:14:59.132137 kernel: Asymmetric key parser 'x509' registered
Sep 12 10:14:59.132146 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 10:14:59.132159 kernel: io scheduler mq-deadline registered
Sep 12 10:14:59.132168 kernel: io scheduler kyber registered
Sep 12 10:14:59.132176 kernel: io scheduler bfq registered
Sep 12 10:14:59.132188 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 10:14:59.132196 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 10:14:59.132207 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 10:14:59.132216 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 12 10:14:59.132225 kernel: i8042: PNP: No PS/2 controller found.
Sep 12 10:14:59.132388 kernel: rtc_cmos 00:02: registered as rtc0
Sep 12 10:14:59.132487 kernel: rtc_cmos 00:02: setting system clock to 2025-09-12T10:14:58 UTC (1757672098)
Sep 12 10:14:59.132583 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 12 10:14:59.132598 kernel: intel_pstate: CPU model not supported
Sep 12 10:14:59.132607 kernel: efifb: probing for efifb
Sep 12 10:14:59.132619 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 12 10:14:59.132628 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 12 10:14:59.132639 kernel: efifb: scrolling: redraw
Sep 12 10:14:59.132648 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 10:14:59.132661 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 10:14:59.132670 kernel: fb0: EFI VGA frame buffer device
Sep 12 10:14:59.132679 kernel: pstore: Using crash dump compression: deflate
Sep 12 10:14:59.132690 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 10:14:59.132699 kernel: NET: Registered PF_INET6 protocol family
Sep 12 10:14:59.132708 kernel: Segment Routing with IPv6
Sep 12 10:14:59.132718 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 10:14:59.132726 kernel: NET: Registered PF_PACKET protocol family
Sep 12 10:14:59.132738 kernel: Key type dns_resolver registered
Sep 12 10:14:59.132749 kernel: IPI shorthand broadcast: enabled
Sep 12 10:14:59.132758 kernel: sched_clock: Marking stable (905074100, 51726100)->(1212381600, -255581400)
Sep 12 10:14:59.132768 kernel: registered taskstats version 1
Sep 12 10:14:59.132777 kernel: Loading compiled-in X.509 certificates
Sep 12 10:14:59.132788 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.105-flatcar: 0972efc09ee0bcd53f8cdb5573e11871ce7b16a9'
Sep 12 10:14:59.132797 kernel: Key type .fscrypt registered
Sep 12 10:14:59.132805 kernel: Key type fscrypt-provisioning registered
Sep 12 10:14:59.132816 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 10:14:59.132825 kernel: ima: Allocated hash algorithm: sha1
Sep 12 10:14:59.132838 kernel: ima: No architecture policies found
Sep 12 10:14:59.132848 kernel: clk: Disabling unused clocks
Sep 12 10:14:59.132856 kernel: Freeing unused kernel image (initmem) memory: 43508K
Sep 12 10:14:59.132868 kernel: Write protecting the kernel read-only data: 38912k
Sep 12 10:14:59.132876 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K
Sep 12 10:14:59.132885 kernel: Run /init as init process
Sep 12 10:14:59.134778 kernel: with arguments:
Sep 12 10:14:59.134798 kernel: /init
Sep 12 10:14:59.134814 kernel: with environment:
Sep 12 10:14:59.134834 kernel: HOME=/
Sep 12 10:14:59.134848 kernel: TERM=linux
Sep 12 10:14:59.134863 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 10:14:59.134880 systemd[1]: Successfully made /usr/ read-only.
Sep 12 10:14:59.134911 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 10:14:59.134928 systemd[1]: Detected virtualization microsoft.
Sep 12 10:14:59.134943 systemd[1]: Detected architecture x86-64.
Sep 12 10:14:59.134957 systemd[1]: Running in initrd.
Sep 12 10:14:59.134977 systemd[1]: No hostname configured, using default hostname.
Sep 12 10:14:59.134993 systemd[1]: Hostname set to .
Sep 12 10:14:59.135009 systemd[1]: Initializing machine ID from random generator.
Sep 12 10:14:59.135026 systemd[1]: Queued start job for default target initrd.target.
Sep 12 10:14:59.135042 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 10:14:59.135059 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 10:14:59.135077 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 10:14:59.135092 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 10:14:59.135111 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 10:14:59.135128 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 10:14:59.135145 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 10:14:59.135161 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 10:14:59.135177 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 10:14:59.135192 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 10:14:59.135211 systemd[1]: Reached target paths.target - Path Units.
Sep 12 10:14:59.135226 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 10:14:59.135240 systemd[1]: Reached target swap.target - Swaps.
Sep 12 10:14:59.135255 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 10:14:59.135271 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 10:14:59.135286 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 10:14:59.135302 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 10:14:59.135317 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 10:14:59.135333 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 10:14:59.135352 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 10:14:59.135367 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 10:14:59.135383 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 10:14:59.135399 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 10:14:59.135415 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 10:14:59.135431 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 10:14:59.135446 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 10:14:59.135462 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 10:14:59.135477 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 10:14:59.135526 systemd-journald[177]: Collecting audit messages is disabled.
Sep 12 10:14:59.135563 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 10:14:59.135579 systemd-journald[177]: Journal started
Sep 12 10:14:59.135624 systemd-journald[177]: Runtime Journal (/run/log/journal/5334a1ec3ad8489d8cd79de956050980) is 8M, max 158.8M, 150.8M free.
Sep 12 10:14:59.146052 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 10:14:59.155158 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 10:14:59.155343 systemd-modules-load[179]: Inserted module 'overlay'
Sep 12 10:14:59.160383 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 10:14:59.168073 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 10:14:59.187403 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 10:14:59.202017 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 10:14:59.202050 kernel: Bridge firewalling registered Sep 12 10:14:59.201234 systemd-modules-load[179]: Inserted module 'br_netfilter' Sep 12 10:14:59.204300 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 10:14:59.216253 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 10:14:59.224417 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:14:59.233653 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 10:14:59.235147 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 10:14:59.245288 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:14:59.252040 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:14:59.256945 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 10:14:59.275416 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 10:14:59.285974 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:14:59.289689 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:14:59.305095 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 10:14:59.312068 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 12 10:14:59.328631 dracut-cmdline[214]: dracut-dracut-053 Sep 12 10:14:59.336820 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2 Sep 12 10:14:59.351657 systemd-resolved[215]: Positive Trust Anchors: Sep 12 10:14:59.351667 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 10:14:59.351705 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 10:14:59.354558 systemd-resolved[215]: Defaulting to hostname 'linux'. Sep 12 10:14:59.355646 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 10:14:59.362697 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 10:14:59.462932 kernel: SCSI subsystem initialized Sep 12 10:14:59.472922 kernel: Loading iSCSI transport class v2.0-870. 
Sep 12 10:14:59.484914 kernel: iscsi: registered transport (tcp) Sep 12 10:14:59.507664 kernel: iscsi: registered transport (qla4xxx) Sep 12 10:14:59.507758 kernel: QLogic iSCSI HBA Driver Sep 12 10:14:59.544540 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 10:14:59.557041 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 10:14:59.589759 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 10:14:59.589849 kernel: device-mapper: uevent: version 1.0.3 Sep 12 10:14:59.593359 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 10:14:59.633922 kernel: raid6: avx512x4 gen() 18149 MB/s Sep 12 10:14:59.652905 kernel: raid6: avx512x2 gen() 18199 MB/s Sep 12 10:14:59.671907 kernel: raid6: avx512x1 gen() 18043 MB/s Sep 12 10:14:59.690911 kernel: raid6: avx2x4 gen() 18121 MB/s Sep 12 10:14:59.709906 kernel: raid6: avx2x2 gen() 18228 MB/s Sep 12 10:14:59.729834 kernel: raid6: avx2x1 gen() 13716 MB/s Sep 12 10:14:59.729865 kernel: raid6: using algorithm avx2x2 gen() 18228 MB/s Sep 12 10:14:59.752103 kernel: raid6: .... xor() 22227 MB/s, rmw enabled Sep 12 10:14:59.752145 kernel: raid6: using avx512x2 recovery algorithm Sep 12 10:14:59.775924 kernel: xor: automatically using best checksumming function avx Sep 12 10:14:59.919925 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 10:14:59.930076 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 10:14:59.938203 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 10:14:59.958029 systemd-udevd[398]: Using default interface naming scheme 'v255'. Sep 12 10:14:59.963210 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 10:14:59.981140 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Sep 12 10:14:59.994605 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Sep 12 10:15:00.024247 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 10:15:00.036271 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 10:15:00.080383 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 10:15:00.096177 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 10:15:00.125181 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 10:15:00.134521 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 10:15:00.142236 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 10:15:00.149340 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 10:15:00.160178 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 10:15:00.177011 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 10:15:00.199927 kernel: hv_vmbus: Vmbus version:5.2 Sep 12 10:15:00.200407 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 10:15:00.216805 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 10:15:00.220580 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:15:00.228119 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:15:00.236398 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 10:15:00.236614 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:15:00.295116 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 12 10:15:00.295163 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 12 10:15:00.295182 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 12 10:15:00.295200 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 12 10:15:00.295219 kernel: PTP clock support registered Sep 12 10:15:00.295236 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 10:15:00.295252 kernel: AES CTR mode by8 optimization enabled Sep 12 10:15:00.295270 kernel: hv_vmbus: registering driver hv_storvsc Sep 12 10:15:00.295287 kernel: scsi host0: storvsc_host_t Sep 12 10:15:00.240303 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:15:00.301911 kernel: hv_utils: Registering HyperV Utility Driver Sep 12 10:15:00.301944 kernel: hv_vmbus: registering driver hv_utils Sep 12 10:15:00.254134 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:15:00.314138 kernel: hv_vmbus: registering driver hv_netvsc Sep 12 10:15:00.314172 kernel: scsi host1: storvsc_host_t Sep 12 10:15:00.314351 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 10:15:00.314365 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 12 10:15:00.314390 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Sep 12 10:15:00.309905 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 10:15:01.346318 kernel: hv_utils: Heartbeat IC version 3.0 Sep 12 10:15:01.346351 kernel: hv_utils: Shutdown IC version 3.2 Sep 12 10:15:01.346383 kernel: hv_utils: TimeSync IC version 4.0 Sep 12 10:15:00.333695 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 10:15:00.333792 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:15:01.341502 systemd-resolved[215]: Clock change detected. Flushing caches. 
Sep 12 10:15:01.372751 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:15:01.401362 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 12 10:15:01.401660 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 10:15:01.409395 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 12 10:15:01.409634 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 12 10:15:01.409757 kernel: hv_vmbus: registering driver hid_hyperv Sep 12 10:15:01.417121 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 12 10:15:01.417400 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 12 10:15:01.417421 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 12 10:15:01.417735 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 12 10:15:01.426003 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 12 10:15:01.426271 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 12 10:15:01.427031 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:15:01.437349 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:15:01.451432 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 10:15:01.451472 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 12 10:15:01.475325 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#39 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 12 10:15:01.480767 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 10:15:01.504974 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#308 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 12 10:15:01.519969 kernel: hv_netvsc 7ced8d47-bb4f-7ced-8d47-bb4f7ced8d47 eth0: VF slot 1 added Sep 12 10:15:01.528973 kernel: hv_vmbus: registering driver hv_pci Sep 12 10:15:01.533971 kernel: hv_pci 0a564f23-8ad8-453b-ae94-d54578c50fa6: PCI VMBus probing: Using version 0x10004 Sep 12 10:15:01.538972 kernel: hv_pci 0a564f23-8ad8-453b-ae94-d54578c50fa6: PCI host bridge to bus 8ad8:00 Sep 12 10:15:01.539175 kernel: pci_bus 8ad8:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Sep 12 10:15:01.544544 kernel: pci_bus 8ad8:00: No busn resource found for root bus, will use [bus 00-ff] Sep 12 10:15:01.550050 kernel: pci 8ad8:00:02.0: [15b3:1016] type 00 class 0x020000 Sep 12 10:15:01.557997 kernel: pci 8ad8:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 12 10:15:01.562057 kernel: pci 8ad8:00:02.0: enabling Extended Tags Sep 12 10:15:01.574056 kernel: pci 8ad8:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8ad8:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Sep 12 10:15:01.580589 kernel: pci_bus 8ad8:00: busn_res: [bus 00-ff] end is updated to 00 Sep 12 10:15:01.580978 kernel: pci 8ad8:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 12 10:15:01.747027 kernel: mlx5_core 8ad8:00:02.0: enabling device (0000 -> 0002) Sep 12 10:15:01.751979 kernel: mlx5_core 8ad8:00:02.0: firmware version: 14.30.5000 Sep 12 10:15:01.965105 kernel: hv_netvsc 7ced8d47-bb4f-7ced-8d47-bb4f7ced8d47 eth0: VF registering: eth1 Sep 12 10:15:01.965428 kernel: mlx5_core 8ad8:00:02.0 eth1: joined to eth0 Sep 12 10:15:01.971630 kernel: mlx5_core 8ad8:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 12 10:15:01.984972 kernel: mlx5_core 8ad8:00:02.0 enP35544s1: renamed from eth1 Sep 12 10:15:02.001973 kernel: BTRFS: device label OEM devid 1 transid 12 
/dev/sda6 scanned by (udev-worker) (449) Sep 12 10:15:02.027763 kernel: BTRFS: device fsid 2566299d-dd4a-4826-ba43-7397a17991fb devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (456) Sep 12 10:15:02.040202 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 12 10:15:02.061441 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 12 10:15:02.073695 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 12 10:15:02.075004 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 12 10:15:02.093474 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 12 10:15:02.105118 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 10:15:02.125975 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 10:15:02.134993 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 10:15:03.146735 disk-uuid[610]: The operation has completed successfully. Sep 12 10:15:03.150841 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 10:15:03.242164 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 10:15:03.242299 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 10:15:03.295125 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 10:15:03.302924 sh[696]: Success Sep 12 10:15:03.332974 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 12 10:15:03.652118 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 10:15:03.658499 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 10:15:03.665387 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 10:15:03.687064 kernel: BTRFS info (device dm-0): first mount of filesystem 2566299d-dd4a-4826-ba43-7397a17991fb Sep 12 10:15:03.687157 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:15:03.690789 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 10:15:03.693964 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 10:15:03.696612 kernel: BTRFS info (device dm-0): using free space tree Sep 12 10:15:04.030867 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 10:15:04.034356 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 10:15:04.042181 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 10:15:04.047914 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 10:15:04.079289 kernel: BTRFS info (device sda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:15:04.079366 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:15:04.079388 kernel: BTRFS info (device sda6): using free space tree Sep 12 10:15:04.120023 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 10:15:04.129014 kernel: BTRFS info (device sda6): last unmount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:15:04.143689 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 10:15:04.151452 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 10:15:04.164192 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 10:15:04.172848 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 12 10:15:04.213772 systemd-networkd[877]: lo: Link UP Sep 12 10:15:04.213784 systemd-networkd[877]: lo: Gained carrier Sep 12 10:15:04.216104 systemd-networkd[877]: Enumeration completed Sep 12 10:15:04.216365 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 10:15:04.221928 systemd[1]: Reached target network.target - Network. Sep 12 10:15:04.224433 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:15:04.224441 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 10:15:04.286972 kernel: mlx5_core 8ad8:00:02.0 enP35544s1: Link up Sep 12 10:15:04.287280 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 12 10:15:04.322651 kernel: hv_netvsc 7ced8d47-bb4f-7ced-8d47-bb4f7ced8d47 eth0: Data path switched to VF: enP35544s1 Sep 12 10:15:04.320971 systemd-networkd[877]: enP35544s1: Link UP Sep 12 10:15:04.321108 systemd-networkd[877]: eth0: Link UP Sep 12 10:15:04.321310 systemd-networkd[877]: eth0: Gained carrier Sep 12 10:15:04.321325 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:15:04.341068 systemd-networkd[877]: enP35544s1: Gained carrier Sep 12 10:15:04.375016 systemd-networkd[877]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 12 10:15:04.871332 ignition[876]: Ignition 2.20.0 Sep 12 10:15:04.871345 ignition[876]: Stage: fetch-offline Sep 12 10:15:04.871389 ignition[876]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:15:04.879478 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Sep 12 10:15:04.871399 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 10:15:04.871502 ignition[876]: parsed url from cmdline: "" Sep 12 10:15:04.871506 ignition[876]: no config URL provided Sep 12 10:15:04.871513 ignition[876]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 10:15:04.894164 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 10:15:04.871524 ignition[876]: no config at "/usr/lib/ignition/user.ign" Sep 12 10:15:04.871530 ignition[876]: failed to fetch config: resource requires networking Sep 12 10:15:04.878131 ignition[876]: Ignition finished successfully Sep 12 10:15:04.911932 ignition[886]: Ignition 2.20.0 Sep 12 10:15:04.911943 ignition[886]: Stage: fetch Sep 12 10:15:04.912199 ignition[886]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:15:04.912213 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 10:15:04.916374 ignition[886]: parsed url from cmdline: "" Sep 12 10:15:04.916584 ignition[886]: no config URL provided Sep 12 10:15:04.916592 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 10:15:04.916607 ignition[886]: no config at "/usr/lib/ignition/user.ign" Sep 12 10:15:04.916641 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 12 10:15:05.002060 ignition[886]: GET result: OK Sep 12 10:15:05.002152 ignition[886]: config has been read from IMDS userdata Sep 12 10:15:05.002180 ignition[886]: parsing config with SHA512: 7d43aa00bd4ca07ab502cd9b298799ccd26d96d4a6462c6787c01a5cf30537efb2bd74822f3a22a573896dfbc557760295ee866c54e9779a625f276a9ed93df5 Sep 12 10:15:05.010989 unknown[886]: fetched base config from "system" Sep 12 10:15:05.011045 unknown[886]: fetched base config from "system" Sep 12 10:15:05.011498 ignition[886]: fetch: fetch complete Sep 12 10:15:05.011054 unknown[886]: fetched user config from "azure" Sep 12 10:15:05.011503 
ignition[886]: fetch: fetch passed Sep 12 10:15:05.014892 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 10:15:05.011557 ignition[886]: Ignition finished successfully Sep 12 10:15:05.040172 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 10:15:05.058100 ignition[894]: Ignition 2.20.0 Sep 12 10:15:05.058118 ignition[894]: Stage: kargs Sep 12 10:15:05.062235 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 10:15:05.058344 ignition[894]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:15:05.058357 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 10:15:05.059179 ignition[894]: kargs: kargs passed Sep 12 10:15:05.059228 ignition[894]: Ignition finished successfully Sep 12 10:15:05.081157 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 10:15:05.094912 ignition[900]: Ignition 2.20.0 Sep 12 10:15:05.094925 ignition[900]: Stage: disks Sep 12 10:15:05.096794 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 10:15:05.095171 ignition[900]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:15:05.101116 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 10:15:05.095186 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 10:15:05.108162 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 10:15:05.095936 ignition[900]: disks: disks passed Sep 12 10:15:05.114144 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 10:15:05.095997 ignition[900]: Ignition finished successfully Sep 12 10:15:05.119904 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 10:15:05.122723 systemd[1]: Reached target basic.target - Basic System. Sep 12 10:15:05.141433 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 12 10:15:05.193484 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 12 10:15:05.198506 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 10:15:05.211111 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 10:15:05.306968 kernel: EXT4-fs (sda9): mounted filesystem 4caafea7-bbab-4a47-b77b-37af606fc08b r/w with ordered data mode. Quota mode: none. Sep 12 10:15:05.307367 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 10:15:05.309302 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 10:15:05.366067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 10:15:05.383313 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (919) Sep 12 10:15:05.390440 kernel: BTRFS info (device sda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:15:05.390524 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:15:05.390390 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 10:15:05.396132 kernel: BTRFS info (device sda6): using free space tree Sep 12 10:15:05.399784 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 10:15:05.413564 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 10:15:05.405375 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 10:15:05.405422 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 10:15:05.416280 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 10:15:05.430213 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 10:15:05.439097 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 12 10:15:05.548192 systemd-networkd[877]: eth0: Gained IPv6LL Sep 12 10:15:06.048001 coreos-metadata[934]: Sep 12 10:15:06.047 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 10:15:06.052999 coreos-metadata[934]: Sep 12 10:15:06.050 INFO Fetch successful Sep 12 10:15:06.052999 coreos-metadata[934]: Sep 12 10:15:06.050 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 12 10:15:06.062077 coreos-metadata[934]: Sep 12 10:15:06.061 INFO Fetch successful Sep 12 10:15:06.076129 coreos-metadata[934]: Sep 12 10:15:06.076 INFO wrote hostname ci-4230.2.2-n-6349f41dc3 to /sysroot/etc/hostname Sep 12 10:15:06.076933 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 10:15:06.154404 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 10:15:06.185347 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory Sep 12 10:15:06.213219 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 10:15:06.229261 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 10:15:07.063652 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 10:15:07.081109 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 10:15:07.093215 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 10:15:07.103443 kernel: BTRFS info (device sda6): last unmount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:15:07.101704 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 10:15:07.134261 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 10:15:07.139727 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Sep 12 10:15:07.146414 ignition[1038]: INFO : Ignition 2.20.0 Sep 12 10:15:07.146414 ignition[1038]: INFO : Stage: mount Sep 12 10:15:07.146414 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 10:15:07.146414 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 10:15:07.146414 ignition[1038]: INFO : mount: mount passed Sep 12 10:15:07.146414 ignition[1038]: INFO : Ignition finished successfully Sep 12 10:15:07.156736 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 10:15:07.167157 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 10:15:07.187973 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1051) Sep 12 10:15:07.191969 kernel: BTRFS info (device sda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:15:07.192008 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:15:07.197118 kernel: BTRFS info (device sda6): using free space tree Sep 12 10:15:07.208977 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 10:15:07.211123 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 10:15:07.247934 ignition[1068]: INFO : Ignition 2.20.0 Sep 12 10:15:07.247934 ignition[1068]: INFO : Stage: files Sep 12 10:15:07.252892 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 10:15:07.252892 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 10:15:07.252892 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Sep 12 10:15:07.263996 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 10:15:07.263996 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 10:15:07.367253 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 10:15:07.371898 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 10:15:07.371898 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 10:15:07.367683 unknown[1068]: wrote ssh authorized keys file for user: core Sep 12 10:15:07.393631 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 10:15:07.399099 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 12 10:15:07.445788 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 10:15:07.554439 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 10:15:07.554439 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 10:15:07.565263 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 12 10:15:07.762179 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 10:15:07.881545 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 10:15:07.887028 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 10:15:07.887028 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 10:15:07.897296 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 10:15:07.897296 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 10:15:07.897296 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 10:15:07.912054 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 10:15:07.912054 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 10:15:07.912054 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 10:15:07.928037 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 10:15:07.933130 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 10:15:07.938519 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 10:15:07.938519 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 10:15:07.938519 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 10:15:07.938519 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 12 10:15:08.386338 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 10:15:08.657781 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 10:15:08.657781 ignition[1068]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 12 10:15:08.691816 ignition[1068]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 10:15:08.698375 ignition[1068]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 10:15:08.698375 ignition[1068]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 12 10:15:08.698375 ignition[1068]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 10:15:08.698375 ignition[1068]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 10:15:08.698375 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 10:15:08.698375 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 10:15:08.698375 ignition[1068]: INFO : files: files passed
Sep 12 10:15:08.698375 ignition[1068]: INFO : Ignition finished successfully
Sep 12 10:15:08.693906 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 10:15:08.718433 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 10:15:08.748183 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 10:15:08.757269 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 10:15:08.759011 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 10:15:08.771864 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 10:15:08.771864 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 10:15:08.781588 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 10:15:08.777939 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 10:15:08.785691 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 10:15:08.803181 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 10:15:08.825989 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 10:15:08.826117 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 10:15:08.832789 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 10:15:08.838701 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 10:15:08.844843 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 10:15:08.853204 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 10:15:08.870222 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 10:15:08.884127 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 10:15:08.897362 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 10:15:08.898818 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 10:15:08.899738 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 10:15:08.900177 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 10:15:08.900324 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 10:15:08.901140 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 10:15:08.901597 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 10:15:08.902032 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 10:15:08.902573 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 10:15:08.903097 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 10:15:08.903556 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 10:15:08.904029 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 10:15:08.904478 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 10:15:08.904938 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 10:15:08.905946 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 10:15:08.906382 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 10:15:08.906517 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 10:15:08.907363 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 10:15:08.907870 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 10:15:08.908306 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 10:15:08.948019 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 10:15:08.955622 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 10:15:08.955767 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 10:15:08.971984 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 10:15:08.972138 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 10:15:08.979594 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 10:15:08.979748 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 10:15:08.988606 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 12 10:15:08.997583 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 10:15:09.021659 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 10:15:09.073833 ignition[1121]: INFO : Ignition 2.20.0
Sep 12 10:15:09.073833 ignition[1121]: INFO : Stage: umount
Sep 12 10:15:09.073833 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 10:15:09.073833 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 10:15:09.073833 ignition[1121]: INFO : umount: umount passed
Sep 12 10:15:09.073833 ignition[1121]: INFO : Ignition finished successfully
Sep 12 10:15:09.054879 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 10:15:09.057549 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 10:15:09.057789 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 10:15:09.061763 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 10:15:09.061906 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 10:15:09.067651 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 10:15:09.067775 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 10:15:09.080819 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 10:15:09.081143 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 10:15:09.089075 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 10:15:09.089152 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 10:15:09.092419 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 10:15:09.092470 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 10:15:09.092880 systemd[1]: Stopped target network.target - Network.
Sep 12 10:15:09.093342 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 10:15:09.093382 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 10:15:09.093822 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 10:15:09.095746 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 10:15:09.126566 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 10:15:09.132068 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 10:15:09.137324 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 10:15:09.142440 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 10:15:09.144875 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 10:15:09.150269 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 10:15:09.150324 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 10:15:09.157707 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 10:15:09.157776 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 10:15:09.162871 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 10:15:09.162926 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 10:15:09.168633 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 10:15:09.173414 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 10:15:09.189367 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 10:15:09.199102 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 10:15:09.202571 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 10:15:09.217487 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 10:15:09.217609 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 10:15:09.226418 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 10:15:09.226674 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 10:15:09.226771 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 10:15:09.231713 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 10:15:09.234427 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 10:15:09.234498 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 10:15:09.259836 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 10:15:09.265496 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 10:15:09.268219 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 10:15:09.274520 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 10:15:09.278173 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 10:15:09.283597 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 10:15:09.283655 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 10:15:09.289107 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 10:15:09.289169 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 10:15:09.304692 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 10:15:09.316163 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 10:15:09.319711 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 10:15:09.325437 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 10:15:09.325619 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 10:15:09.332775 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 10:15:09.332856 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 10:15:09.338426 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 10:15:09.338466 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 10:15:09.344158 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 10:15:09.344218 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 10:15:09.350855 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 10:15:09.350904 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 10:15:09.377084 kernel: hv_netvsc 7ced8d47-bb4f-7ced-8d47-bb4f7ced8d47 eth0: Data path switched from VF: enP35544s1
Sep 12 10:15:09.362770 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 10:15:09.362851 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 10:15:09.385138 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 10:15:09.389038 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 10:15:09.389107 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 10:15:09.395080 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 10:15:09.395129 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 10:15:09.401601 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 10:15:09.401665 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 10:15:09.419696 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 10:15:09.419762 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 10:15:09.432598 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 10:15:09.432685 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 10:15:09.433093 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 10:15:09.433200 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 10:15:09.448988 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 10:15:09.449121 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 10:15:09.744496 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 10:15:09.744634 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 10:15:09.747919 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 10:15:09.752965 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 10:15:09.753042 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 10:15:09.765161 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 10:15:09.859541 systemd[1]: Switching root.
Sep 12 10:15:09.932675 systemd-journald[177]: Journal stopped
Sep 12 10:15:17.001747 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Sep 12 10:15:17.001805 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 10:15:17.001824 kernel: SELinux: policy capability open_perms=1
Sep 12 10:15:17.001837 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 10:15:17.001850 kernel: SELinux: policy capability always_check_network=0
Sep 12 10:15:17.001864 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 10:15:17.001879 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 10:15:17.001901 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 10:15:17.001917 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 10:15:17.001934 kernel: audit: type=1403 audit(1757672111.271:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 10:15:17.001978 systemd[1]: Successfully loaded SELinux policy in 169.524ms.
Sep 12 10:15:17.001995 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.697ms.
Sep 12 10:15:17.002012 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 10:15:17.002040 systemd[1]: Detected virtualization microsoft.
Sep 12 10:15:17.002062 systemd[1]: Detected architecture x86-64.
Sep 12 10:15:17.002077 systemd[1]: Detected first boot.
Sep 12 10:15:17.002095 systemd[1]: Hostname set to .
Sep 12 10:15:17.002111 systemd[1]: Initializing machine ID from random generator.
Sep 12 10:15:17.002126 zram_generator::config[1167]: No configuration found.
Sep 12 10:15:17.002145 kernel: Guest personality initialized and is inactive
Sep 12 10:15:17.002159 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Sep 12 10:15:17.002174 kernel: Initialized host personality
Sep 12 10:15:17.002188 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 10:15:17.002202 systemd[1]: Populated /etc with preset unit settings.
Sep 12 10:15:17.002220 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 10:15:17.002235 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 10:15:17.002251 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 10:15:17.002270 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 10:15:17.002285 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 10:15:17.002305 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 10:15:17.002321 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 10:15:17.002339 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 10:15:17.002355 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 10:15:17.002371 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 10:15:17.002393 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 10:15:17.002410 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 10:15:17.002428 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 10:15:17.002446 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 10:15:17.002462 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 10:15:17.002480 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 10:15:17.002501 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 10:15:17.002518 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 10:15:17.002538 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 10:15:17.002560 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 10:15:17.002579 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 10:15:17.002599 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 10:15:17.002617 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 10:15:17.002635 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 10:15:17.002653 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 10:15:17.002671 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 10:15:17.002693 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 10:15:17.002710 systemd[1]: Reached target swap.target - Swaps.
Sep 12 10:15:17.002729 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 10:15:17.002746 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 10:15:17.002763 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 10:15:17.002781 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 10:15:17.002802 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 10:15:17.002820 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 10:15:17.002836 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 10:15:17.002854 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 10:15:17.002870 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 10:15:17.002888 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 10:15:17.002904 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:15:17.002924 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 10:15:17.002941 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 10:15:17.003064 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 10:15:17.003085 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 10:15:17.003102 systemd[1]: Reached target machines.target - Containers.
Sep 12 10:15:17.003120 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 10:15:17.003138 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:15:17.003154 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 10:15:17.003177 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 10:15:17.003193 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 10:15:17.003210 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 10:15:17.003227 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 10:15:17.003243 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 10:15:17.003260 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 10:15:17.003277 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 10:15:17.003294 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 10:15:17.003314 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 10:15:17.003330 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 10:15:17.003347 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 10:15:17.003365 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:15:17.003385 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 10:15:17.003402 kernel: loop: module loaded
Sep 12 10:15:17.003418 kernel: fuse: init (API version 7.39)
Sep 12 10:15:17.003433 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 10:15:17.003454 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 10:15:17.003470 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 10:15:17.003487 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 10:15:17.003553 systemd-journald[1270]: Collecting audit messages is disabled.
Sep 12 10:15:17.003591 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 10:15:17.003608 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 10:15:17.003624 systemd[1]: Stopped verity-setup.service.
Sep 12 10:15:17.003642 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:15:17.003658 kernel: ACPI: bus type drm_connector registered
Sep 12 10:15:17.003675 systemd-journald[1270]: Journal started
Sep 12 10:15:17.003711 systemd-journald[1270]: Runtime Journal (/run/log/journal/9d71a0faee78488ab0bd7691506c420c) is 8M, max 158.8M, 150.8M free.
Sep 12 10:15:16.192197 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 10:15:16.202925 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 12 10:15:16.203354 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 10:15:17.012239 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 10:15:17.014225 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 10:15:17.018319 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 10:15:17.022013 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 10:15:17.024900 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 10:15:17.028149 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 10:15:17.031246 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 10:15:17.034162 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 10:15:17.037882 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 10:15:17.041513 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 10:15:17.041706 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 10:15:17.045448 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 10:15:17.045642 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 10:15:17.050514 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 10:15:17.050734 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 10:15:17.054641 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 10:15:17.054846 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 10:15:17.058510 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 10:15:17.058668 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 10:15:17.062150 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 10:15:17.062339 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 10:15:17.065732 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 10:15:17.069356 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 10:15:17.073543 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 10:15:17.081408 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 10:15:17.100419 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 10:15:17.112184 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 10:15:17.124026 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 10:15:17.130072 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 10:15:17.130133 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 10:15:17.136474 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 10:15:17.146096 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 10:15:17.158342 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 10:15:17.163394 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:15:17.207244 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 10:15:17.212472 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 10:15:17.216164 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 10:15:17.221078 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 10:15:17.224390 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 10:15:17.225750 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 10:15:17.233335 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 10:15:17.242470 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 10:15:17.249442 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 10:15:17.253742 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 10:15:17.258364 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 10:15:17.262435 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 10:15:17.274167 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 10:15:17.283931 udevadm[1317]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 12 10:15:17.306794 systemd-journald[1270]: Time spent on flushing to /var/log/journal/9d71a0faee78488ab0bd7691506c420c is 18.521ms for 985 entries.
Sep 12 10:15:17.306794 systemd-journald[1270]: System Journal (/var/log/journal/9d71a0faee78488ab0bd7691506c420c) is 8M, max 2.6G, 2.6G free.
Sep 12 10:15:17.353496 systemd-journald[1270]: Received client request to flush runtime journal.
Sep 12 10:15:17.353540 kernel: loop0: detected capacity change from 0 to 147912
Sep 12 10:15:17.318573 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 10:15:17.323437 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 10:15:17.329208 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 10:15:17.355411 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 10:15:17.395738 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 10:15:17.397785 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 10:15:17.405564 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 10:15:17.453741 systemd-tmpfiles[1310]: ACLs are not supported, ignoring.
Sep 12 10:15:17.453767 systemd-tmpfiles[1310]: ACLs are not supported, ignoring.
Sep 12 10:15:17.472648 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 10:15:17.483131 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 10:15:17.926984 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 10:15:18.008982 kernel: loop1: detected capacity change from 0 to 138176
Sep 12 10:15:18.048520 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 10:15:18.063431 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 10:15:18.080699 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
Sep 12 10:15:18.080726 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
Sep 12 10:15:18.085009 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 10:15:18.556969 kernel: loop2: detected capacity change from 0 to 28272
Sep 12 10:15:18.975977 kernel: loop3: detected capacity change from 0 to 229808
Sep 12 10:15:19.024976 kernel: loop4: detected capacity change from 0 to 147912
Sep 12 10:15:19.074981 kernel: loop5: detected capacity change from 0 to 138176
Sep 12 10:15:19.100982 kernel: loop6: detected capacity change from 0 to 28272
Sep 12 10:15:19.116979 kernel: loop7: detected capacity change from 0 to 229808
Sep 12 10:15:19.192566 (sd-merge)[1337]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 12 10:15:19.193200 (sd-merge)[1337]: Merged extensions into '/usr'.
Sep 12 10:15:19.197448 systemd[1]: Reload requested from client PID 1309 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 10:15:19.197469 systemd[1]: Reloading...
Sep 12 10:15:19.291977 zram_generator::config[1364]: No configuration found.
Sep 12 10:15:19.439111 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 10:15:19.516607 systemd[1]: Reloading finished in 318 ms.
Sep 12 10:15:19.537401 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 10:15:19.541657 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 10:15:19.560149 systemd[1]: Starting ensure-sysext.service...
Sep 12 10:15:19.564177 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 10:15:19.572130 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 10:15:19.616252 systemd[1]: Reload requested from client PID 1424 ('systemctl') (unit ensure-sysext.service)...
Sep 12 10:15:19.616273 systemd[1]: Reloading...
Sep 12 10:15:19.617719 systemd-tmpfiles[1425]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 10:15:19.618653 systemd-tmpfiles[1425]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 10:15:19.621271 systemd-tmpfiles[1425]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 10:15:19.622417 systemd-tmpfiles[1425]: ACLs are not supported, ignoring.
Sep 12 10:15:19.622502 systemd-tmpfiles[1425]: ACLs are not supported, ignoring.
Sep 12 10:15:19.630701 systemd-udevd[1426]: Using default interface naming scheme 'v255'.
Sep 12 10:15:19.660641 systemd-tmpfiles[1425]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 10:15:19.661034 systemd-tmpfiles[1425]: Skipping /boot
Sep 12 10:15:19.684111 zram_generator::config[1456]: No configuration found.
Sep 12 10:15:19.692755 systemd-tmpfiles[1425]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 10:15:19.692930 systemd-tmpfiles[1425]: Skipping /boot
Sep 12 10:15:19.849718 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 10:15:19.930091 systemd[1]: Reloading finished in 313 ms.
Sep 12 10:15:19.958169 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 10:15:19.974262 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 10:15:20.012278 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 10:15:20.017820 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 10:15:20.025071 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 10:15:20.036232 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 10:15:20.042461 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:15:20.042661 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:15:20.044052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 10:15:20.052041 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 10:15:20.059294 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 10:15:20.062607 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:15:20.062912 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:15:20.063104 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:15:20.065101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 10:15:20.066252 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 10:15:20.073510 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 10:15:20.074093 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 10:15:20.079837 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 10:15:20.080032 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 10:15:20.090216 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:15:20.090494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:15:20.096290 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 10:15:20.102483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 10:15:20.112588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 10:15:20.117729 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:15:20.118066 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:15:20.118371 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:15:20.120690 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 10:15:20.120899 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 10:15:20.126706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 10:15:20.126931 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 10:15:20.131679 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 10:15:20.132113 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 10:15:20.138171 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 10:15:20.150755 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Sep 12 10:15:20.157496 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:15:20.157918 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:15:20.164439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 10:15:20.169936 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 10:15:20.175300 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 10:15:20.189872 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 10:15:20.199249 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:15:20.201934 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:15:20.202677 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 10:15:20.213311 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 10:15:20.218797 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:15:20.222332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 10:15:20.223098 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 10:15:20.227282 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 10:15:20.227519 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 10:15:20.231239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 10:15:20.231634 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 10:15:20.235890 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 10:15:20.236148 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 10:15:20.242826 systemd[1]: Finished ensure-sysext.service.
Sep 12 10:15:20.252545 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 10:15:20.252622 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 10:15:20.311543 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 10:15:20.335098 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 10:15:20.356233 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 10:15:20.374478 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 10:15:20.410726 augenrules[1591]: No rules
Sep 12 10:15:20.414839 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 10:15:20.415162 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 10:15:20.512656 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 10:15:20.586222 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 10:15:20.602981 kernel: hv_vmbus: registering driver hv_balloon
Sep 12 10:15:20.607044 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 12 10:15:20.624741 kernel: hv_vmbus: registering driver hyperv_fb
Sep 12 10:15:20.629967 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 12 10:15:20.635970 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 12 10:15:20.641733 kernel: Console: switching to colour dummy device 80x25
Sep 12 10:15:20.649403 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 10:15:20.656824 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Sep 12 10:15:20.684816 systemd-networkd[1580]: lo: Link UP
Sep 12 10:15:20.684833 systemd-networkd[1580]: lo: Gained carrier
Sep 12 10:15:20.696069 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#42 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 12 10:15:20.695312 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 10:15:20.766216 systemd-networkd[1580]: Enumeration completed
Sep 12 10:15:20.776326 systemd-networkd[1580]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 10:15:20.776661 systemd-networkd[1580]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 10:15:20.784239 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 10:15:20.801711 systemd-resolved[1520]: Positive Trust Anchors:
Sep 12 10:15:20.801734 systemd-resolved[1520]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 10:15:20.801809 systemd-resolved[1520]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 10:15:20.807943 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 10:15:20.816391 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 10:15:20.825851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 10:15:20.826674 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 10:15:20.832497 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 10:15:20.857257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 10:15:20.870754 kernel: mlx5_core 8ad8:00:02.0 enP35544s1: Link up
Sep 12 10:15:20.891156 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 12 10:15:20.878474 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 10:15:20.878733 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 10:15:20.887778 systemd-resolved[1520]: Using system hostname 'ci-4230.2.2-n-6349f41dc3'.
Sep 12 10:15:20.888179 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 10:15:20.899045 kernel: hv_netvsc 7ced8d47-bb4f-7ced-8d47-bb4f7ced8d47 eth0: Data path switched to VF: enP35544s1
Sep 12 10:15:20.904026 systemd-networkd[1580]: enP35544s1: Link UP
Sep 12 10:15:20.908174 systemd-networkd[1580]: eth0: Link UP
Sep 12 10:15:20.908502 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 10:15:20.911753 systemd[1]: Reached target network.target - Network.
Sep 12 10:15:20.914223 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 10:15:20.917586 systemd-networkd[1580]: eth0: Gained carrier
Sep 12 10:15:20.917673 systemd-networkd[1580]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 10:15:20.923006 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 10:15:20.981347 systemd-networkd[1580]: enP35544s1: Gained carrier
Sep 12 10:15:21.023026 systemd-networkd[1580]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 12 10:15:21.030022 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1584)
Sep 12 10:15:21.102121 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Sep 12 10:15:21.156784 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 12 10:15:21.185203 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 10:15:21.228900 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 10:15:21.240130 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 10:15:21.258656 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 10:15:21.333755 lvm[1690]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 10:15:21.375857 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 10:15:21.381031 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 10:15:21.391162 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 10:15:21.395880 lvm[1694]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 10:15:21.425854 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 10:15:21.999671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 10:15:22.124102 systemd-networkd[1580]: eth0: Gained IPv6LL
Sep 12 10:15:22.127494 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 10:15:22.131720 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 10:15:22.648561 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 10:15:22.654233 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 10:15:25.972114 ldconfig[1304]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 10:15:25.983800 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 10:15:25.992243 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 10:15:26.015209 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 10:15:26.018820 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 10:15:26.025042 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 10:15:26.028522 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 10:15:26.035429 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 10:15:26.038446 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 10:15:26.042221 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 10:15:26.045549 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 10:15:26.045590 systemd[1]: Reached target paths.target - Path Units.
Sep 12 10:15:26.048127 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 10:15:26.077440 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 10:15:26.082813 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 10:15:26.088637 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 10:15:26.092985 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 10:15:26.096670 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 10:15:26.108728 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 10:15:26.112433 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 10:15:26.116704 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 10:15:26.119824 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 10:15:26.122766 systemd[1]: Reached target basic.target - Basic System.
Sep 12 10:15:26.125428 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 10:15:26.125465 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 10:15:26.146078 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 12 10:15:26.151104 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 10:15:26.165456 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 10:15:26.178204 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 10:15:26.186044 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 10:15:26.197169 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 10:15:26.202089 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 10:15:26.202152 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Sep 12 10:15:26.204823 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 12 10:15:26.208050 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Sep 12 10:15:26.210115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:15:26.217128 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 10:15:26.229365 (chronyd)[1706]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Sep 12 10:15:26.234241 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 10:15:26.231793 KVP[1715]: KVP starting; pid is:1715
Sep 12 10:15:26.237863 jq[1713]: false
Sep 12 10:15:26.243282 kernel: hv_utils: KVP IC version 4.0
Sep 12 10:15:26.243526 KVP[1715]: KVP LIC Version: 3.1
Sep 12 10:15:26.246207 chronyd[1722]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Sep 12 10:15:26.246780 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 10:15:26.257497 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 10:15:26.265755 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 10:15:26.280179 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 10:15:26.285030 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 10:15:26.285928 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 10:15:26.288152 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 10:15:26.292940 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 10:15:26.305382 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 10:15:26.305834 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 10:15:26.311657 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 10:15:26.311923 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 10:15:26.336058 chronyd[1722]: Timezone right/UTC failed leap second check, ignoring
Sep 12 10:15:26.337245 jq[1729]: true
Sep 12 10:15:26.336263 chronyd[1722]: Loaded seccomp filter (level 2)
Sep 12 10:15:26.340454 systemd[1]: Started chronyd.service - NTP client/server.
Sep 12 10:15:26.344760 extend-filesystems[1714]: Found loop4
Sep 12 10:15:26.371429 extend-filesystems[1714]: Found loop5
Sep 12 10:15:26.371429 extend-filesystems[1714]: Found loop6
Sep 12 10:15:26.371429 extend-filesystems[1714]: Found loop7
Sep 12 10:15:26.371429 extend-filesystems[1714]: Found sda
Sep 12 10:15:26.371429 extend-filesystems[1714]: Found sda1
Sep 12 10:15:26.371429 extend-filesystems[1714]: Found sda2
Sep 12 10:15:26.371429 extend-filesystems[1714]: Found sda3
Sep 12 10:15:26.371429 extend-filesystems[1714]: Found usr
Sep 12 10:15:26.371429 extend-filesystems[1714]: Found sda4
Sep 12 10:15:26.371429 extend-filesystems[1714]: Found sda6
Sep 12 10:15:26.371429 extend-filesystems[1714]: Found sda7
Sep 12 10:15:26.371429 extend-filesystems[1714]: Found sda9
Sep 12 10:15:26.371429 extend-filesystems[1714]: Checking size of /dev/sda9
Sep 12 10:15:26.364704 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 10:15:26.491714 extend-filesystems[1714]: Old size kept for /dev/sda9
Sep 12 10:15:26.491714 extend-filesystems[1714]: Found sr0
Sep 12 10:15:26.508089 tar[1734]: linux-amd64/LICENSE
Sep 12 10:15:26.508089 tar[1734]: linux-amd64/helm
Sep 12 10:15:26.508870 update_engine[1728]: I20250912 10:15:26.464722 1728 main.cc:92] Flatcar Update Engine starting
Sep 12 10:15:26.365459 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 10:15:26.376334 (ntainerd)[1747]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 10:15:26.439587 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 10:15:26.439860 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 10:15:26.516105 jq[1746]: true
Sep 12 10:15:26.478163 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 10:15:26.522863 systemd-logind[1727]: New seat seat0.
Sep 12 10:15:26.523942 systemd-logind[1727]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 10:15:26.524185 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 10:15:26.537083 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1788)
Sep 12 10:15:26.615484 bash[1784]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 10:15:26.607744 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 10:15:26.623264 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 10:15:26.643568 dbus-daemon[1712]: [system] SELinux support is enabled
Sep 12 10:15:26.644171 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 10:15:26.657292 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 10:15:26.657634 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 10:15:26.661633 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 10:15:26.661659 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 10:15:26.682712 dbus-daemon[1712]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 12 10:15:26.694566 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 10:15:26.705444 update_engine[1728]: I20250912 10:15:26.705220 1728 update_check_scheduler.cc:74] Next update check in 6m20s
Sep 12 10:15:26.707925 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 10:15:26.829000 coreos-metadata[1708]: Sep 12 10:15:26.828 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 12 10:15:26.835943 coreos-metadata[1708]: Sep 12 10:15:26.835 INFO Fetch successful
Sep 12 10:15:26.835943 coreos-metadata[1708]: Sep 12 10:15:26.835 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Sep 12 10:15:26.842694 coreos-metadata[1708]: Sep 12 10:15:26.841 INFO Fetch successful
Sep 12 10:15:26.842694 coreos-metadata[1708]: Sep 12 10:15:26.842 INFO Fetching http://168.63.129.16/machine/50ce11d7-530b-4836-a401-a62f04c97079/d71c2a17%2D2116%2D4790%2Da2c4%2D8e24dda06fd9.%5Fci%2D4230.2.2%2Dn%2D6349f41dc3?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Sep 12 10:15:26.844737 coreos-metadata[1708]: Sep 12 10:15:26.844 INFO Fetch successful
Sep 12 10:15:26.848653 coreos-metadata[1708]: Sep 12 10:15:26.845 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Sep 12 10:15:26.864180 coreos-metadata[1708]: Sep 12 10:15:26.864 INFO Fetch successful
Sep 12 10:15:26.944323 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 12 10:15:26.950633 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 10:15:27.112336 sshd_keygen[1767]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 10:15:27.115338 locksmithd[1843]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 10:15:27.159453 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 10:15:27.177436 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 10:15:27.193198 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Sep 12 10:15:27.230197 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 10:15:27.230901 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 10:15:27.248141 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 10:15:27.255094 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Sep 12 10:15:27.320229 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 10:15:27.336371 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 10:15:27.342903 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 10:15:27.347811 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 10:15:27.519266 tar[1734]: linux-amd64/README.md
Sep 12 10:15:27.531162 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 10:15:27.673974 containerd[1747]: time="2025-09-12T10:15:27.673527100Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Sep 12 10:15:27.705380 containerd[1747]: time="2025-09-12T10:15:27.705325100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 10:15:27.706918 containerd[1747]: time="2025-09-12T10:15:27.706875300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.105-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 10:15:27.706918 containerd[1747]: time="2025-09-12T10:15:27.706909100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 10:15:27.707077 containerd[1747]: time="2025-09-12T10:15:27.706931000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 10:15:27.707148 containerd[1747]: time="2025-09-12T10:15:27.707122300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 10:15:27.707246 containerd[1747]: time="2025-09-12T10:15:27.707151800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 10:15:27.707287 containerd[1747]: time="2025-09-12T10:15:27.707242100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 10:15:27.707287 containerd[1747]: time="2025-09-12T10:15:27.707261300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 10:15:27.707509 containerd[1747]: time="2025-09-12T10:15:27.707481100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 10:15:27.707509 containerd[1747]: time="2025-09-12T10:15:27.707502400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 10:15:27.707599 containerd[1747]: time="2025-09-12T10:15:27.707521400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 10:15:27.707599 containerd[1747]: time="2025-09-12T10:15:27.707534400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 10:15:27.707673 containerd[1747]: time="2025-09-12T10:15:27.707640200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 10:15:27.707881 containerd[1747]: time="2025-09-12T10:15:27.707851600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 10:15:27.708070 containerd[1747]: time="2025-09-12T10:15:27.708045300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 10:15:27.708070 containerd[1747]: time="2025-09-12T10:15:27.708066400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 10:15:27.708198 containerd[1747]: time="2025-09-12T10:15:27.708177200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 10:15:27.708262 containerd[1747]: time="2025-09-12T10:15:27.708244900Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 10:15:27.723196 containerd[1747]: time="2025-09-12T10:15:27.723157100Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 10:15:27.723292 containerd[1747]: time="2025-09-12T10:15:27.723219100Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 10:15:27.723292 containerd[1747]: time="2025-09-12T10:15:27.723240400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 10:15:27.723292 containerd[1747]: time="2025-09-12T10:15:27.723261800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 10:15:27.723292 containerd[1747]: time="2025-09-12T10:15:27.723279400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 10:15:27.724005 containerd[1747]: time="2025-09-12T10:15:27.723453400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 12 10:15:27.724005 containerd[1747]: time="2025-09-12T10:15:27.723754200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 12 10:15:27.724005 containerd[1747]: time="2025-09-12T10:15:27.723872100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 12 10:15:27.724005 containerd[1747]: time="2025-09-12T10:15:27.723893500Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 12 10:15:27.724005 containerd[1747]: time="2025-09-12T10:15:27.723913000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 12 10:15:27.724005 containerd[1747]: time="2025-09-12T10:15:27.723932100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 12 10:15:27.724005 containerd[1747]: time="2025-09-12T10:15:27.723968900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 12 10:15:27.724005 containerd[1747]: time="2025-09-12T10:15:27.723987000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 12 10:15:27.724005 containerd[1747]: time="2025-09-12T10:15:27.724007400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 12 10:15:27.724353 containerd[1747]: time="2025-09-12T10:15:27.724027900Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..."
type=io.containerd.service.v1 Sep 12 10:15:27.724353 containerd[1747]: time="2025-09-12T10:15:27.724043900Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 10:15:27.724353 containerd[1747]: time="2025-09-12T10:15:27.724059200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 10:15:27.724353 containerd[1747]: time="2025-09-12T10:15:27.724075300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 10:15:27.724353 containerd[1747]: time="2025-09-12T10:15:27.724101400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 10:15:27.724353 containerd[1747]: time="2025-09-12T10:15:27.724120500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 10:15:27.724353 containerd[1747]: time="2025-09-12T10:15:27.724137500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 10:15:27.724353 containerd[1747]: time="2025-09-12T10:15:27.724156600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 10:15:27.724353 containerd[1747]: time="2025-09-12T10:15:27.724184200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 10:15:27.724353 containerd[1747]: time="2025-09-12T10:15:27.724205500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 10:15:27.724353 containerd[1747]: time="2025-09-12T10:15:27.724221900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 10:15:27.724353 containerd[1747]: time="2025-09-12T10:15:27.724240000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Sep 12 10:15:27.724353 containerd[1747]: time="2025-09-12T10:15:27.724258400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 10:15:27.724353 containerd[1747]: time="2025-09-12T10:15:27.724278800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 10:15:27.724818 containerd[1747]: time="2025-09-12T10:15:27.724295300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 10:15:27.724818 containerd[1747]: time="2025-09-12T10:15:27.724312900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 10:15:27.724818 containerd[1747]: time="2025-09-12T10:15:27.724338000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 10:15:27.724818 containerd[1747]: time="2025-09-12T10:15:27.724360300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 10:15:27.724818 containerd[1747]: time="2025-09-12T10:15:27.724387900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 10:15:27.724818 containerd[1747]: time="2025-09-12T10:15:27.724406800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 10:15:27.724818 containerd[1747]: time="2025-09-12T10:15:27.724422400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 10:15:27.724818 containerd[1747]: time="2025-09-12T10:15:27.724473900Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 10:15:27.724818 containerd[1747]: time="2025-09-12T10:15:27.724496300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 10:15:27.724818 containerd[1747]: time="2025-09-12T10:15:27.724518200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 10:15:27.724818 containerd[1747]: time="2025-09-12T10:15:27.724534900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 10:15:27.724818 containerd[1747]: time="2025-09-12T10:15:27.724549200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 10:15:27.724818 containerd[1747]: time="2025-09-12T10:15:27.724567000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 10:15:27.724818 containerd[1747]: time="2025-09-12T10:15:27.724580500Z" level=info msg="NRI interface is disabled by configuration." Sep 12 10:15:27.725320 containerd[1747]: time="2025-09-12T10:15:27.724594800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 10:15:27.725381 containerd[1747]: time="2025-09-12T10:15:27.725114800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 10:15:27.725381 containerd[1747]: time="2025-09-12T10:15:27.725188000Z" level=info msg="Connect containerd service" Sep 12 10:15:27.725381 containerd[1747]: time="2025-09-12T10:15:27.725242500Z" level=info msg="using legacy CRI server" Sep 12 10:15:27.725381 containerd[1747]: time="2025-09-12T10:15:27.725254800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 10:15:27.725641 containerd[1747]: time="2025-09-12T10:15:27.725433600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 10:15:27.726579 containerd[1747]: time="2025-09-12T10:15:27.726534000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 10:15:27.727612 containerd[1747]: time="2025-09-12T10:15:27.727437900Z" level=info msg="Start subscribing containerd event" Sep 12 10:15:27.727612 containerd[1747]: time="2025-09-12T10:15:27.727499700Z" level=info msg="Start recovering state" Sep 12 10:15:27.727612 containerd[1747]: time="2025-09-12T10:15:27.727581700Z" level=info msg="Start event monitor" Sep 12 10:15:27.727612 containerd[1747]: time="2025-09-12T10:15:27.727596800Z" level=info msg="Start 
snapshots syncer" Sep 12 10:15:27.727612 containerd[1747]: time="2025-09-12T10:15:27.727608800Z" level=info msg="Start cni network conf syncer for default" Sep 12 10:15:27.727797 containerd[1747]: time="2025-09-12T10:15:27.727626800Z" level=info msg="Start streaming server" Sep 12 10:15:27.728632 containerd[1747]: time="2025-09-12T10:15:27.728203500Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 10:15:27.728632 containerd[1747]: time="2025-09-12T10:15:27.728263700Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 10:15:27.728424 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 10:15:27.735933 containerd[1747]: time="2025-09-12T10:15:27.735897300Z" level=info msg="containerd successfully booted in 0.063721s" Sep 12 10:15:28.197112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:15:28.200942 (kubelet)[1900]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 10:15:28.201453 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 10:15:28.204808 systemd[1]: Startup finished in 1.053s (kernel) + 11.383s (initrd) + 17.101s (userspace) = 29.537s. Sep 12 10:15:28.802885 login[1886]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 12 10:15:28.815344 login[1887]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 10:15:28.827682 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 10:15:28.834155 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 10:15:28.846703 systemd-logind[1727]: New session 2 of user core. Sep 12 10:15:28.864167 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 10:15:28.871348 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 12 10:15:28.890924 (systemd)[1912]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 10:15:28.897322 systemd-logind[1727]: New session c1 of user core. Sep 12 10:15:28.986245 kubelet[1900]: E0912 10:15:28.986115 1900 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 10:15:28.988905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 10:15:28.989102 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 10:15:28.989407 systemd[1]: kubelet.service: Consumed 971ms CPU time, 268.6M memory peak. Sep 12 10:15:29.267553 systemd[1912]: Queued start job for default target default.target. Sep 12 10:15:29.274032 systemd[1912]: Created slice app.slice - User Application Slice. Sep 12 10:15:29.274068 systemd[1912]: Reached target paths.target - Paths. Sep 12 10:15:29.274120 systemd[1912]: Reached target timers.target - Timers. Sep 12 10:15:29.275445 systemd[1912]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 10:15:29.286288 systemd[1912]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 10:15:29.286482 systemd[1912]: Reached target sockets.target - Sockets. Sep 12 10:15:29.286538 systemd[1912]: Reached target basic.target - Basic System. Sep 12 10:15:29.286585 systemd[1912]: Reached target default.target - Main User Target. Sep 12 10:15:29.286621 systemd[1912]: Startup finished in 376ms. Sep 12 10:15:29.286892 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 10:15:29.294131 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 12 10:15:29.667721 waagent[1883]: 2025-09-12T10:15:29.667613Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 12 10:15:29.687988 waagent[1883]: 2025-09-12T10:15:29.669179Z INFO Daemon Daemon OS: flatcar 4230.2.2 Sep 12 10:15:29.687988 waagent[1883]: 2025-09-12T10:15:29.670119Z INFO Daemon Daemon Python: 3.11.11 Sep 12 10:15:29.687988 waagent[1883]: 2025-09-12T10:15:29.670783Z INFO Daemon Daemon Run daemon Sep 12 10:15:29.687988 waagent[1883]: 2025-09-12T10:15:29.671634Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.2' Sep 12 10:15:29.687988 waagent[1883]: 2025-09-12T10:15:29.672513Z INFO Daemon Daemon Using waagent for provisioning Sep 12 10:15:29.687988 waagent[1883]: 2025-09-12T10:15:29.673561Z INFO Daemon Daemon Activate resource disk Sep 12 10:15:29.687988 waagent[1883]: 2025-09-12T10:15:29.673913Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 12 10:15:29.687988 waagent[1883]: 2025-09-12T10:15:29.679078Z INFO Daemon Daemon Found device: None Sep 12 10:15:29.687988 waagent[1883]: 2025-09-12T10:15:29.680959Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 12 10:15:29.687988 waagent[1883]: 2025-09-12T10:15:29.681474Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 12 10:15:29.687988 waagent[1883]: 2025-09-12T10:15:29.682352Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 10:15:29.687988 waagent[1883]: 2025-09-12T10:15:29.683376Z INFO Daemon Daemon Running default provisioning handler Sep 12 10:15:29.711869 waagent[1883]: 2025-09-12T10:15:29.711774Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Sep 12 10:15:29.719150 waagent[1883]: 2025-09-12T10:15:29.719080Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 12 10:15:29.728211 waagent[1883]: 2025-09-12T10:15:29.720656Z INFO Daemon Daemon cloud-init is enabled: False Sep 12 10:15:29.728211 waagent[1883]: 2025-09-12T10:15:29.721580Z INFO Daemon Daemon Copying ovf-env.xml Sep 12 10:15:29.803337 login[1886]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 10:15:29.807863 systemd-logind[1727]: New session 1 of user core. Sep 12 10:15:29.819115 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 10:15:29.884858 waagent[1883]: 2025-09-12T10:15:29.882202Z INFO Daemon Daemon Successfully mounted dvd Sep 12 10:15:29.907857 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 12 10:15:29.910738 waagent[1883]: 2025-09-12T10:15:29.910657Z INFO Daemon Daemon Detect protocol endpoint Sep 12 10:15:29.927436 waagent[1883]: 2025-09-12T10:15:29.912266Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 10:15:29.927436 waagent[1883]: 2025-09-12T10:15:29.913210Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 12 10:15:29.927436 waagent[1883]: 2025-09-12T10:15:29.913619Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 12 10:15:29.927436 waagent[1883]: 2025-09-12T10:15:29.914870Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 12 10:15:29.927436 waagent[1883]: 2025-09-12T10:15:29.915729Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 12 10:15:29.949591 waagent[1883]: 2025-09-12T10:15:29.949521Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 12 10:15:29.958168 waagent[1883]: 2025-09-12T10:15:29.951127Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 12 10:15:29.958168 waagent[1883]: 2025-09-12T10:15:29.951936Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 12 10:15:30.030744 waagent[1883]: 2025-09-12T10:15:30.030632Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 12 10:15:30.034289 waagent[1883]: 2025-09-12T10:15:30.034208Z INFO Daemon Daemon Forcing an update of the goal state. Sep 12 10:15:30.040636 waagent[1883]: 2025-09-12T10:15:30.040578Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 10:15:30.057541 waagent[1883]: 2025-09-12T10:15:30.057473Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 12 10:15:30.075782 waagent[1883]: 2025-09-12T10:15:30.059538Z INFO Daemon Sep 12 10:15:30.075782 waagent[1883]: 2025-09-12T10:15:30.059702Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 8e8477f5-c14c-422c-b6fd-8bec17c479b3 eTag: 1607805290349648687 source: Fabric] Sep 12 10:15:30.075782 waagent[1883]: 2025-09-12T10:15:30.061088Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Sep 12 10:15:30.075782 waagent[1883]: 2025-09-12T10:15:30.061794Z INFO Daemon Sep 12 10:15:30.075782 waagent[1883]: 2025-09-12T10:15:30.062825Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 12 10:15:30.078946 waagent[1883]: 2025-09-12T10:15:30.078889Z INFO Daemon Daemon Downloading artifacts profile blob Sep 12 10:15:30.151874 waagent[1883]: 2025-09-12T10:15:30.151786Z INFO Daemon Downloaded certificate {'thumbprint': 'FD8A9A76A2FBC8E69B81A313D5701A2E23139523', 'hasPrivateKey': True} Sep 12 10:15:30.157632 waagent[1883]: 2025-09-12T10:15:30.157564Z INFO Daemon Fetch goal state completed Sep 12 10:15:30.165945 waagent[1883]: 2025-09-12T10:15:30.165891Z INFO Daemon Daemon Starting provisioning Sep 12 10:15:30.173502 waagent[1883]: 2025-09-12T10:15:30.167410Z INFO Daemon Daemon Handle ovf-env.xml. Sep 12 10:15:30.173502 waagent[1883]: 2025-09-12T10:15:30.168485Z INFO Daemon Daemon Set hostname [ci-4230.2.2-n-6349f41dc3] Sep 12 10:15:30.192999 waagent[1883]: 2025-09-12T10:15:30.192896Z INFO Daemon Daemon Publish hostname [ci-4230.2.2-n-6349f41dc3] Sep 12 10:15:30.201874 waagent[1883]: 2025-09-12T10:15:30.194584Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 12 10:15:30.201874 waagent[1883]: 2025-09-12T10:15:30.195096Z INFO Daemon Daemon Primary interface is [eth0] Sep 12 10:15:30.216372 systemd-networkd[1580]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:15:30.216383 systemd-networkd[1580]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 12 10:15:30.216432 systemd-networkd[1580]: eth0: DHCP lease lost Sep 12 10:15:30.217644 waagent[1883]: 2025-09-12T10:15:30.217579Z INFO Daemon Daemon Create user account if not exists Sep 12 10:15:30.220878 waagent[1883]: 2025-09-12T10:15:30.220817Z INFO Daemon Daemon User core already exists, skip useradd Sep 12 10:15:30.223964 waagent[1883]: 2025-09-12T10:15:30.222712Z INFO Daemon Daemon Configure sudoer Sep 12 10:15:30.224267 waagent[1883]: 2025-09-12T10:15:30.224220Z INFO Daemon Daemon Configure sshd Sep 12 10:15:30.225692 waagent[1883]: 2025-09-12T10:15:30.225647Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 12 10:15:30.226652 waagent[1883]: 2025-09-12T10:15:30.226609Z INFO Daemon Daemon Deploy ssh public key. Sep 12 10:15:30.268997 systemd-networkd[1580]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 12 10:15:31.397986 waagent[1883]: 2025-09-12T10:15:31.397906Z INFO Daemon Daemon Provisioning complete Sep 12 10:15:31.412188 waagent[1883]: 2025-09-12T10:15:31.412120Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 12 10:15:31.415765 waagent[1883]: 2025-09-12T10:15:31.415692Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Sep 12 10:15:31.420777 waagent[1883]: 2025-09-12T10:15:31.420712Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 12 10:15:31.548296 waagent[1963]: 2025-09-12T10:15:31.548197Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 12 10:15:31.548760 waagent[1963]: 2025-09-12T10:15:31.548371Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.2 Sep 12 10:15:31.548760 waagent[1963]: 2025-09-12T10:15:31.548454Z INFO ExtHandler ExtHandler Python: 3.11.11 Sep 12 10:15:32.166198 waagent[1963]: 2025-09-12T10:15:32.166016Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 12 10:15:32.166529 waagent[1963]: 2025-09-12T10:15:32.166456Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 10:15:32.166629 waagent[1963]: 2025-09-12T10:15:32.166582Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 10:15:32.174126 waagent[1963]: 2025-09-12T10:15:32.174062Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 10:15:32.179767 waagent[1963]: 2025-09-12T10:15:32.179715Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 12 10:15:32.180248 waagent[1963]: 2025-09-12T10:15:32.180191Z INFO ExtHandler Sep 12 10:15:32.180332 waagent[1963]: 2025-09-12T10:15:32.180288Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c7941e1d-fafc-4090-96c5-042ef54c4ccd eTag: 1607805290349648687 source: Fabric] Sep 12 10:15:32.180649 waagent[1963]: 2025-09-12T10:15:32.180596Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 12 10:15:32.193526 waagent[1963]: 2025-09-12T10:15:32.193447Z INFO ExtHandler Sep 12 10:15:32.193616 waagent[1963]: 2025-09-12T10:15:32.193586Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 12 10:15:32.197869 waagent[1963]: 2025-09-12T10:15:32.197822Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 12 10:15:32.302587 waagent[1963]: 2025-09-12T10:15:32.302487Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FD8A9A76A2FBC8E69B81A313D5701A2E23139523', 'hasPrivateKey': True} Sep 12 10:15:32.303184 waagent[1963]: 2025-09-12T10:15:32.303120Z INFO ExtHandler Fetch goal state completed Sep 12 10:15:32.317135 waagent[1963]: 2025-09-12T10:15:32.317051Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1963 Sep 12 10:15:32.317307 waagent[1963]: 2025-09-12T10:15:32.317253Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 12 10:15:32.318887 waagent[1963]: 2025-09-12T10:15:32.318824Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.2', '', 'Flatcar Container Linux by Kinvolk'] Sep 12 10:15:32.319266 waagent[1963]: 2025-09-12T10:15:32.319213Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 12 10:15:32.455936 waagent[1963]: 2025-09-12T10:15:32.455830Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 12 10:15:32.456174 waagent[1963]: 2025-09-12T10:15:32.456115Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 12 10:15:32.462680 waagent[1963]: 2025-09-12T10:15:32.462525Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 12 10:15:32.473153 systemd[1]: Reload requested from client PID 1978 ('systemctl') (unit waagent.service)... Sep 12 10:15:32.473173 systemd[1]: Reloading... 
Sep 12 10:15:32.574980 zram_generator::config[2013]: No configuration found. Sep 12 10:15:32.713395 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:15:32.824233 systemd[1]: Reloading finished in 350 ms. Sep 12 10:15:32.844431 waagent[1963]: 2025-09-12T10:15:32.842028Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 12 10:15:32.852121 systemd[1]: Reload requested from client PID 2074 ('systemctl') (unit waagent.service)... Sep 12 10:15:32.852139 systemd[1]: Reloading... Sep 12 10:15:32.962988 zram_generator::config[2116]: No configuration found. Sep 12 10:15:33.089031 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:15:33.200743 systemd[1]: Reloading finished in 348 ms. Sep 12 10:15:33.219430 waagent[1963]: 2025-09-12T10:15:33.217102Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 12 10:15:33.219430 waagent[1963]: 2025-09-12T10:15:33.217324Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 12 10:15:33.571322 waagent[1963]: 2025-09-12T10:15:33.571229Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 12 10:15:33.571899 waagent[1963]: 2025-09-12T10:15:33.571826Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 12 10:15:33.572704 waagent[1963]: 2025-09-12T10:15:33.572632Z INFO ExtHandler ExtHandler Starting env monitor service. 
Sep 12 10:15:33.572839 waagent[1963]: 2025-09-12T10:15:33.572790Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 12 10:15:33.573016 waagent[1963]: 2025-09-12T10:15:33.572945Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 12 10:15:33.573612 waagent[1963]: 2025-09-12T10:15:33.573421Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Sep 12 10:15:33.573791 waagent[1963]: 2025-09-12T10:15:33.573743Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Sep 12 10:15:33.574214 waagent[1963]: 2025-09-12T10:15:33.574169Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 12 10:15:33.574358 waagent[1963]: 2025-09-12T10:15:33.574296Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 12 10:15:33.574358 waagent[1963]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 12 10:15:33.574358 waagent[1963]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Sep 12 10:15:33.574358 waagent[1963]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 12 10:15:33.574358 waagent[1963]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 12 10:15:33.574358 waagent[1963]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 12 10:15:33.574358 waagent[1963]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 12 10:15:33.574627 waagent[1963]: 2025-09-12T10:15:33.574420Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 12 10:15:33.574678 waagent[1963]: 2025-09-12T10:15:33.574599Z INFO EnvHandler ExtHandler Configure routes
Sep 12 10:15:33.574719 waagent[1963]: 2025-09-12T10:15:33.574686Z INFO EnvHandler ExtHandler Gateway:None
Sep 12 10:15:33.574795 waagent[1963]: 2025-09-12T10:15:33.574756Z INFO EnvHandler ExtHandler Routes:None
Sep 12 10:15:33.577021 waagent[1963]: 2025-09-12T10:15:33.575562Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Sep 12 10:15:33.577021 waagent[1963]: 2025-09-12T10:15:33.575711Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Sep 12 10:15:33.577021 waagent[1963]: 2025-09-12T10:15:33.576083Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Sep 12 10:15:33.577021 waagent[1963]: 2025-09-12T10:15:33.576010Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Sep 12 10:15:33.577209 waagent[1963]: 2025-09-12T10:15:33.577070Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 12 10:15:33.585545 waagent[1963]: 2025-09-12T10:15:33.585500Z INFO ExtHandler ExtHandler
Sep 12 10:15:33.585747 waagent[1963]: 2025-09-12T10:15:33.585704Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ab057904-aba6-42c1-92a2-979501783954 correlation 9e9b2e77-0f0e-4f7d-9600-42058cddedc5 created: 2025-09-12T10:14:22.775002Z]
Sep 12 10:15:33.586382 waagent[1963]: 2025-09-12T10:15:33.586329Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Sep 12 10:15:33.588001 waagent[1963]: 2025-09-12T10:15:33.587928Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms]
Sep 12 10:15:33.621582 waagent[1963]: 2025-09-12T10:15:33.621532Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 5F8A1C83-641F-4393-8798-143D4E8CA9F7;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Sep 12 10:15:33.652398 waagent[1963]: 2025-09-12T10:15:33.652314Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 12 10:15:33.652398 waagent[1963]: Executing ['ip', '-a', '-o', 'link']:
Sep 12 10:15:33.652398 waagent[1963]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 12 10:15:33.652398 waagent[1963]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:47:bb:4f brd ff:ff:ff:ff:ff:ff
Sep 12 10:15:33.652398 waagent[1963]: 3: enP35544s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:47:bb:4f brd ff:ff:ff:ff:ff:ff\ altname enP35544p0s2
Sep 12 10:15:33.652398 waagent[1963]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 12 10:15:33.652398 waagent[1963]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 12 10:15:33.652398 waagent[1963]: 2: eth0 inet 10.200.8.13/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 12 10:15:33.652398 waagent[1963]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 12 10:15:33.652398 waagent[1963]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Sep 12 10:15:33.652398 waagent[1963]: 2: eth0 inet6 fe80::7eed:8dff:fe47:bb4f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Sep 12 10:15:33.736730 waagent[1963]: 2025-09-12T10:15:33.736660Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Sep 12 10:15:33.736730 waagent[1963]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 12 10:15:33.736730 waagent[1963]: pkts bytes target prot opt in out source destination
Sep 12 10:15:33.736730 waagent[1963]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 12 10:15:33.736730 waagent[1963]: pkts bytes target prot opt in out source destination
Sep 12 10:15:33.736730 waagent[1963]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 12 10:15:33.736730 waagent[1963]: pkts bytes target prot opt in out source destination
Sep 12 10:15:33.736730 waagent[1963]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 12 10:15:33.736730 waagent[1963]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 12 10:15:33.736730 waagent[1963]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 12 10:15:33.740094 waagent[1963]: 2025-09-12T10:15:33.740034Z INFO EnvHandler ExtHandler Current Firewall rules:
Sep 12 10:15:33.740094 waagent[1963]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 12 10:15:33.740094 waagent[1963]: pkts bytes target prot opt in out source destination
Sep 12 10:15:33.740094 waagent[1963]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 12 10:15:33.740094 waagent[1963]: pkts bytes target prot opt in out source destination
Sep 12 10:15:33.740094 waagent[1963]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 12 10:15:33.740094 waagent[1963]: pkts bytes target prot opt in out source destination
Sep 12 10:15:33.740094 waagent[1963]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 12 10:15:33.740094 waagent[1963]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 12 10:15:33.740094 waagent[1963]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 12 10:15:33.740481 waagent[1963]: 2025-09-12T10:15:33.740346Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Sep 12 10:15:39.094110 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 10:15:39.099200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:15:39.216053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:15:39.224279 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 10:15:39.738441 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 10:15:39.744279 systemd[1]: Started sshd@0-10.200.8.13:22-10.200.16.10:55492.service - OpenSSH per-connection server daemon (10.200.16.10:55492).
Sep 12 10:15:39.912884 kubelet[2209]: E0912 10:15:39.912832 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 10:15:39.916759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 10:15:39.916968 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 10:15:39.917415 systemd[1]: kubelet.service: Consumed 145ms CPU time, 108.7M memory peak.
Sep 12 10:15:40.659560 sshd[2216]: Accepted publickey for core from 10.200.16.10 port 55492 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:15:40.661102 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:15:40.666932 systemd-logind[1727]: New session 3 of user core.
Sep 12 10:15:40.673139 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 10:15:41.210278 systemd[1]: Started sshd@1-10.200.8.13:22-10.200.16.10:41288.service - OpenSSH per-connection server daemon (10.200.16.10:41288).
Sep 12 10:15:41.850970 sshd[2222]: Accepted publickey for core from 10.200.16.10 port 41288 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:15:41.852400 sshd-session[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:15:41.857187 systemd-logind[1727]: New session 4 of user core.
Sep 12 10:15:41.868167 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 10:15:42.294748 sshd[2224]: Connection closed by 10.200.16.10 port 41288
Sep 12 10:15:42.295827 sshd-session[2222]: pam_unix(sshd:session): session closed for user core
Sep 12 10:15:42.298767 systemd[1]: sshd@1-10.200.8.13:22-10.200.16.10:41288.service: Deactivated successfully.
Sep 12 10:15:42.300822 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 10:15:42.302587 systemd-logind[1727]: Session 4 logged out. Waiting for processes to exit.
Sep 12 10:15:42.303533 systemd-logind[1727]: Removed session 4.
Sep 12 10:15:42.410576 systemd[1]: Started sshd@2-10.200.8.13:22-10.200.16.10:41294.service - OpenSSH per-connection server daemon (10.200.16.10:41294).
Sep 12 10:15:43.032375 sshd[2230]: Accepted publickey for core from 10.200.16.10 port 41294 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:15:43.035275 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:15:43.040291 systemd-logind[1727]: New session 5 of user core.
Sep 12 10:15:43.049126 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 10:15:43.473473 sshd[2232]: Connection closed by 10.200.16.10 port 41294
Sep 12 10:15:43.474221 sshd-session[2230]: pam_unix(sshd:session): session closed for user core
Sep 12 10:15:43.477222 systemd[1]: sshd@2-10.200.8.13:22-10.200.16.10:41294.service: Deactivated successfully.
Sep 12 10:15:43.479229 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 10:15:43.480669 systemd-logind[1727]: Session 5 logged out. Waiting for processes to exit.
Sep 12 10:15:43.481716 systemd-logind[1727]: Removed session 5.
Sep 12 10:15:43.588278 systemd[1]: Started sshd@3-10.200.8.13:22-10.200.16.10:41304.service - OpenSSH per-connection server daemon (10.200.16.10:41304).
Sep 12 10:15:44.209972 sshd[2238]: Accepted publickey for core from 10.200.16.10 port 41304 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:15:44.211724 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:15:44.216611 systemd-logind[1727]: New session 6 of user core.
Sep 12 10:15:44.228116 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 10:15:44.651428 sshd[2240]: Connection closed by 10.200.16.10 port 41304
Sep 12 10:15:44.652432 sshd-session[2238]: pam_unix(sshd:session): session closed for user core
Sep 12 10:15:44.655395 systemd[1]: sshd@3-10.200.8.13:22-10.200.16.10:41304.service: Deactivated successfully.
Sep 12 10:15:44.657491 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 10:15:44.659227 systemd-logind[1727]: Session 6 logged out. Waiting for processes to exit.
Sep 12 10:15:44.660154 systemd-logind[1727]: Removed session 6.
Sep 12 10:15:44.766535 systemd[1]: Started sshd@4-10.200.8.13:22-10.200.16.10:41310.service - OpenSSH per-connection server daemon (10.200.16.10:41310).
Sep 12 10:15:45.387661 sshd[2246]: Accepted publickey for core from 10.200.16.10 port 41310 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:15:45.389110 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:15:45.394609 systemd-logind[1727]: New session 7 of user core.
Sep 12 10:15:45.404164 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 10:15:45.982725 sudo[2249]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 10:15:45.983109 sudo[2249]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 10:15:46.007372 sudo[2249]: pam_unix(sudo:session): session closed for user root
Sep 12 10:15:46.107538 sshd[2248]: Connection closed by 10.200.16.10 port 41310
Sep 12 10:15:46.108616 sshd-session[2246]: pam_unix(sshd:session): session closed for user core
Sep 12 10:15:46.111681 systemd[1]: sshd@4-10.200.8.13:22-10.200.16.10:41310.service: Deactivated successfully.
Sep 12 10:15:46.113760 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 10:15:46.115511 systemd-logind[1727]: Session 7 logged out. Waiting for processes to exit.
Sep 12 10:15:46.116453 systemd-logind[1727]: Removed session 7.
Sep 12 10:15:46.222272 systemd[1]: Started sshd@5-10.200.8.13:22-10.200.16.10:41318.service - OpenSSH per-connection server daemon (10.200.16.10:41318).
Sep 12 10:15:46.844377 sshd[2255]: Accepted publickey for core from 10.200.16.10 port 41318 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:15:46.845838 sshd-session[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:15:46.850005 systemd-logind[1727]: New session 8 of user core.
Sep 12 10:15:46.856128 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 10:15:47.188307 sudo[2259]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 10:15:47.188664 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 10:15:47.192093 sudo[2259]: pam_unix(sudo:session): session closed for user root
Sep 12 10:15:47.197307 sudo[2258]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 10:15:47.197648 sudo[2258]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 10:15:47.210444 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 10:15:47.237487 augenrules[2281]: No rules
Sep 12 10:15:47.239029 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 10:15:47.239306 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 10:15:47.240672 sudo[2258]: pam_unix(sudo:session): session closed for user root
Sep 12 10:15:47.340914 sshd[2257]: Connection closed by 10.200.16.10 port 41318
Sep 12 10:15:47.341700 sshd-session[2255]: pam_unix(sshd:session): session closed for user core
Sep 12 10:15:47.344631 systemd[1]: sshd@5-10.200.8.13:22-10.200.16.10:41318.service: Deactivated successfully.
Sep 12 10:15:47.346627 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 10:15:47.348081 systemd-logind[1727]: Session 8 logged out. Waiting for processes to exit.
Sep 12 10:15:47.349268 systemd-logind[1727]: Removed session 8.
Sep 12 10:15:47.456281 systemd[1]: Started sshd@6-10.200.8.13:22-10.200.16.10:41334.service - OpenSSH per-connection server daemon (10.200.16.10:41334).
Sep 12 10:15:48.081534 sshd[2290]: Accepted publickey for core from 10.200.16.10 port 41334 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:15:48.084368 sshd-session[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:15:48.089790 systemd-logind[1727]: New session 9 of user core.
Sep 12 10:15:48.095127 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 10:15:48.425238 sudo[2293]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 10:15:48.425614 sudo[2293]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 10:15:49.977041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 10:15:49.984390 (dockerd)[2311]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 10:15:49.984816 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 10:15:49.986425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:15:50.125537 chronyd[1722]: Selected source PHC0
Sep 12 10:15:50.166558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:15:50.171005 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 10:15:50.820154 kubelet[2320]: E0912 10:15:50.820006 2320 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 10:15:50.823410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 10:15:50.823618 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 10:15:50.823983 systemd[1]: kubelet.service: Consumed 165ms CPU time, 108.8M memory peak.
Sep 12 10:15:52.641990 dockerd[2311]: time="2025-09-12T10:15:52.640789066Z" level=info msg="Starting up"
Sep 12 10:15:53.012093 systemd[1]: var-lib-docker-metacopy\x2dcheck173847225-merged.mount: Deactivated successfully.
Sep 12 10:15:53.069771 dockerd[2311]: time="2025-09-12T10:15:53.069720166Z" level=info msg="Loading containers: start."
Sep 12 10:15:53.330017 kernel: Initializing XFRM netlink socket
Sep 12 10:15:53.496009 systemd-networkd[1580]: docker0: Link UP
Sep 12 10:15:53.584213 dockerd[2311]: time="2025-09-12T10:15:53.584162466Z" level=info msg="Loading containers: done."
Sep 12 10:15:53.605182 dockerd[2311]: time="2025-09-12T10:15:53.605133166Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 10:15:53.605364 dockerd[2311]: time="2025-09-12T10:15:53.605248866Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Sep 12 10:15:53.605421 dockerd[2311]: time="2025-09-12T10:15:53.605386966Z" level=info msg="Daemon has completed initialization"
Sep 12 10:15:53.658739 dockerd[2311]: time="2025-09-12T10:15:53.658493066Z" level=info msg="API listen on /run/docker.sock"
Sep 12 10:15:53.658622 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 10:15:54.887193 containerd[1747]: time="2025-09-12T10:15:54.887152766Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 12 10:15:55.814180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275896686.mount: Deactivated successfully.
Sep 12 10:15:57.696808 containerd[1747]: time="2025-09-12T10:15:57.696755066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:15:57.699797 containerd[1747]: time="2025-09-12T10:15:57.699751266Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114901"
Sep 12 10:15:57.704787 containerd[1747]: time="2025-09-12T10:15:57.704729666Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:15:57.715226 containerd[1747]: time="2025-09-12T10:15:57.714885766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:15:57.716021 containerd[1747]: time="2025-09-12T10:15:57.715983866Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.8287904s"
Sep 12 10:15:57.716106 containerd[1747]: time="2025-09-12T10:15:57.716029166Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 12 10:15:57.716940 containerd[1747]: time="2025-09-12T10:15:57.716911166Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 12 10:15:59.593598 containerd[1747]: time="2025-09-12T10:15:59.593538493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:15:59.596781 containerd[1747]: time="2025-09-12T10:15:59.596515420Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020852"
Sep 12 10:15:59.599378 containerd[1747]: time="2025-09-12T10:15:59.599335336Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:15:59.603855 containerd[1747]: time="2025-09-12T10:15:59.603803177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:15:59.605031 containerd[1747]: time="2025-09-12T10:15:59.604997669Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.888053102s"
Sep 12 10:15:59.605111 containerd[1747]: time="2025-09-12T10:15:59.605033171Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 12 10:15:59.605926 containerd[1747]: time="2025-09-12T10:15:59.605886936Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 12 10:16:00.844441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 12 10:16:00.855754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:16:01.015150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:16:01.024351 (kubelet)[2585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 10:16:01.618907 kubelet[2585]: E0912 10:16:01.618773 2585 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 10:16:01.621443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 10:16:01.621646 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 10:16:01.622209 systemd[1]: kubelet.service: Consumed 189ms CPU time, 110.1M memory peak.
Sep 12 10:16:01.627495 containerd[1747]: time="2025-09-12T10:16:01.627453817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:16:01.630100 containerd[1747]: time="2025-09-12T10:16:01.629880753Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155576"
Sep 12 10:16:01.634105 containerd[1747]: time="2025-09-12T10:16:01.634055015Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:16:01.642351 containerd[1747]: time="2025-09-12T10:16:01.642285937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:16:01.643821 containerd[1747]: time="2025-09-12T10:16:01.643303052Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.037382914s"
Sep 12 10:16:01.643821 containerd[1747]: time="2025-09-12T10:16:01.643343652Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 12 10:16:01.644200 containerd[1747]: time="2025-09-12T10:16:01.644177165Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 12 10:16:02.820806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227431950.mount: Deactivated successfully.
Sep 12 10:16:03.384892 containerd[1747]: time="2025-09-12T10:16:03.384834590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:16:03.387610 containerd[1747]: time="2025-09-12T10:16:03.387428028Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929477"
Sep 12 10:16:03.392975 containerd[1747]: time="2025-09-12T10:16:03.392897509Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:16:03.397418 containerd[1747]: time="2025-09-12T10:16:03.396692265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:16:03.397418 containerd[1747]: time="2025-09-12T10:16:03.397267174Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.752931406s"
Sep 12 10:16:03.397418 containerd[1747]: time="2025-09-12T10:16:03.397301074Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 12 10:16:03.397975 containerd[1747]: time="2025-09-12T10:16:03.397885883Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 12 10:16:03.957712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3247999475.mount: Deactivated successfully.
Sep 12 10:16:05.330905 containerd[1747]: time="2025-09-12T10:16:05.330845450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:16:05.334264 containerd[1747]: time="2025-09-12T10:16:05.334194799Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Sep 12 10:16:05.337647 containerd[1747]: time="2025-09-12T10:16:05.337585649Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:16:05.343386 containerd[1747]: time="2025-09-12T10:16:05.343318434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:16:05.344620 containerd[1747]: time="2025-09-12T10:16:05.344456351Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.946532568s"
Sep 12 10:16:05.344620 containerd[1747]: time="2025-09-12T10:16:05.344500352Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 12 10:16:05.345389 containerd[1747]: time="2025-09-12T10:16:05.345354864Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 10:16:05.894629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3423716282.mount: Deactivated successfully.
Sep 12 10:16:05.916555 containerd[1747]: time="2025-09-12T10:16:05.916498605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:16:05.920251 containerd[1747]: time="2025-09-12T10:16:05.920187060Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Sep 12 10:16:05.923086 containerd[1747]: time="2025-09-12T10:16:05.923031302Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:16:05.927669 containerd[1747]: time="2025-09-12T10:16:05.927620570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:16:05.928629 containerd[1747]: time="2025-09-12T10:16:05.928371781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 582.952615ms"
Sep 12 10:16:05.928629 containerd[1747]: time="2025-09-12T10:16:05.928409181Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 10:16:05.929386 containerd[1747]: time="2025-09-12T10:16:05.929338395Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 12 10:16:06.564522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount803649069.mount: Deactivated successfully.
Sep 12 10:16:08.705043 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Sep 12 10:16:08.837823 containerd[1747]: time="2025-09-12T10:16:08.837764638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:16:08.840137 containerd[1747]: time="2025-09-12T10:16:08.840081279Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378441"
Sep 12 10:16:08.843000 containerd[1747]: time="2025-09-12T10:16:08.842928429Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:16:08.847488 containerd[1747]: time="2025-09-12T10:16:08.847437309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:16:08.848709 containerd[1747]: time="2025-09-12T10:16:08.848525328Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.919153633s"
Sep 12 10:16:08.848709 containerd[1747]: time="2025-09-12T10:16:08.848563729Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 12 10:16:11.672226 update_engine[1728]: I20250912 10:16:11.671054 1728 update_attempter.cc:509] Updating boot flags...
Sep 12 10:16:11.704053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 12 10:16:11.716171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:16:11.795023 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2750)
Sep 12 10:16:12.528207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:16:12.539810 (kubelet)[2802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 10:16:12.618210 kubelet[2802]: E0912 10:16:12.613887 2802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 10:16:12.626871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 10:16:12.627083 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 10:16:12.627465 systemd[1]: kubelet.service: Consumed 188ms CPU time, 110.3M memory peak.
Sep 12 10:16:12.641976 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2753)
Sep 12 10:16:13.097872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:16:13.098557 systemd[1]: kubelet.service: Consumed 188ms CPU time, 110.3M memory peak.
Sep 12 10:16:13.107332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:16:13.141841 systemd[1]: Reload requested from client PID 2868 ('systemctl') (unit session-9.scope)... Sep 12 10:16:13.141862 systemd[1]: Reloading... Sep 12 10:16:13.255979 zram_generator::config[2915]: No configuration found. Sep 12 10:16:13.420173 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:16:13.551443 systemd[1]: Reloading finished in 409 ms. Sep 12 10:16:13.609804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:16:13.615039 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:16:13.616319 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 10:16:13.616510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:16:13.616550 systemd[1]: kubelet.service: Consumed 135ms CPU time, 98.5M memory peak. Sep 12 10:16:13.621230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:16:14.620537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:16:14.627441 (kubelet)[2987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 10:16:14.671884 kubelet[2987]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 10:16:14.672411 kubelet[2987]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 10:16:14.672411 kubelet[2987]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 10:16:14.674527 kubelet[2987]: I0912 10:16:14.673962 2987 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 10:16:15.283118 kubelet[2987]: I0912 10:16:15.283073 2987 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 10:16:15.283118 kubelet[2987]: I0912 10:16:15.283104 2987 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 10:16:15.283435 kubelet[2987]: I0912 10:16:15.283411 2987 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 10:16:15.313035 kubelet[2987]: E0912 10:16:15.312991 2987 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 10:16:15.315866 kubelet[2987]: I0912 10:16:15.315600 2987 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 10:16:15.321179 kubelet[2987]: E0912 10:16:15.321143 2987 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 10:16:15.321179 kubelet[2987]: I0912 10:16:15.321169 2987 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 10:16:15.325275 kubelet[2987]: I0912 10:16:15.325251 2987 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 10:16:15.325573 kubelet[2987]: I0912 10:16:15.325534 2987 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 10:16:15.325754 kubelet[2987]: I0912 10:16:15.325572 2987 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-n-6349f41dc3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 10:16:15.325905 kubelet[2987]: I0912 10:16:15.325765 2987 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 
10:16:15.325905 kubelet[2987]: I0912 10:16:15.325781 2987 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 10:16:15.326840 kubelet[2987]: I0912 10:16:15.326815 2987 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:16:15.329024 kubelet[2987]: I0912 10:16:15.329001 2987 kubelet.go:480] "Attempting to sync node with API server" Sep 12 10:16:15.329024 kubelet[2987]: I0912 10:16:15.329026 2987 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 10:16:15.329929 kubelet[2987]: I0912 10:16:15.329675 2987 kubelet.go:386] "Adding apiserver pod source" Sep 12 10:16:15.329929 kubelet[2987]: I0912 10:16:15.329704 2987 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 10:16:15.337713 kubelet[2987]: E0912 10:16:15.337303 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-6349f41dc3&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 10:16:15.340746 kubelet[2987]: E0912 10:16:15.340702 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 10:16:15.340837 kubelet[2987]: I0912 10:16:15.340805 2987 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 10:16:15.341289 kubelet[2987]: I0912 10:16:15.341263 2987 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 
10:16:15.342626 kubelet[2987]: W0912 10:16:15.342596 2987 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 10:16:15.347388 kubelet[2987]: I0912 10:16:15.346580 2987 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 10:16:15.347388 kubelet[2987]: I0912 10:16:15.346645 2987 server.go:1289] "Started kubelet" Sep 12 10:16:15.352509 kubelet[2987]: I0912 10:16:15.352479 2987 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 10:16:15.354372 kubelet[2987]: E0912 10:16:15.353053 2987 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.13:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-n-6349f41dc3.18648187eb1f5ae5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-n-6349f41dc3,UID:ci-4230.2.2-n-6349f41dc3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-n-6349f41dc3,},FirstTimestamp:2025-09-12 10:16:15.346604773 +0000 UTC m=+0.715108351,LastTimestamp:2025-09-12 10:16:15.346604773 +0000 UTC m=+0.715108351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-n-6349f41dc3,}" Sep 12 10:16:15.358709 kubelet[2987]: I0912 10:16:15.358654 2987 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 10:16:15.360974 kubelet[2987]: I0912 10:16:15.359976 2987 server.go:317] "Adding debug handlers to kubelet server" Sep 12 10:16:15.365273 kubelet[2987]: I0912 10:16:15.365249 2987 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 10:16:15.365521 kubelet[2987]: E0912 10:16:15.365494 2987 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"ci-4230.2.2-n-6349f41dc3\" not found" Sep 12 10:16:15.365844 kubelet[2987]: I0912 10:16:15.365821 2987 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 10:16:15.365907 kubelet[2987]: I0912 10:16:15.365893 2987 reconciler.go:26] "Reconciler: start to sync state" Sep 12 10:16:15.367773 kubelet[2987]: E0912 10:16:15.366868 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 10:16:15.368082 kubelet[2987]: I0912 10:16:15.368032 2987 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 10:16:15.368162 kubelet[2987]: E0912 10:16:15.368062 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-6349f41dc3?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="200ms" Sep 12 10:16:15.368298 kubelet[2987]: I0912 10:16:15.368280 2987 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 10:16:15.368570 kubelet[2987]: I0912 10:16:15.368549 2987 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 10:16:15.371738 kubelet[2987]: E0912 10:16:15.371711 2987 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 10:16:15.374627 kubelet[2987]: I0912 10:16:15.374607 2987 factory.go:223] Registration of the containerd container factory successfully Sep 12 10:16:15.374781 kubelet[2987]: I0912 10:16:15.374769 2987 factory.go:223] Registration of the systemd container factory successfully Sep 12 10:16:15.375004 kubelet[2987]: I0912 10:16:15.374931 2987 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 10:16:15.392040 kubelet[2987]: I0912 10:16:15.392018 2987 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 10:16:15.392171 kubelet[2987]: I0912 10:16:15.392160 2987 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 10:16:15.392251 kubelet[2987]: I0912 10:16:15.392241 2987 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:16:15.399150 kubelet[2987]: I0912 10:16:15.399133 2987 policy_none.go:49] "None policy: Start" Sep 12 10:16:15.399239 kubelet[2987]: I0912 10:16:15.399232 2987 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 10:16:15.399280 kubelet[2987]: I0912 10:16:15.399275 2987 state_mem.go:35] "Initializing new in-memory state store" Sep 12 10:16:15.411119 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 10:16:15.420436 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 10:16:15.424366 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 12 10:16:15.432766 kubelet[2987]: E0912 10:16:15.432678 2987 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 10:16:15.434100 kubelet[2987]: I0912 10:16:15.433431 2987 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 10:16:15.434100 kubelet[2987]: I0912 10:16:15.433455 2987 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 10:16:15.434100 kubelet[2987]: I0912 10:16:15.433813 2987 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 10:16:15.437460 kubelet[2987]: E0912 10:16:15.437435 2987 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 10:16:15.437608 kubelet[2987]: E0912 10:16:15.437588 2987 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.2-n-6349f41dc3\" not found" Sep 12 10:16:15.438247 kubelet[2987]: I0912 10:16:15.438193 2987 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 10:16:15.442292 kubelet[2987]: I0912 10:16:15.442246 2987 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 10:16:15.442292 kubelet[2987]: I0912 10:16:15.442270 2987 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 10:16:15.442443 kubelet[2987]: I0912 10:16:15.442364 2987 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 10:16:15.442443 kubelet[2987]: I0912 10:16:15.442375 2987 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 10:16:15.443148 kubelet[2987]: E0912 10:16:15.443126 2987 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 12 10:16:15.445613 kubelet[2987]: E0912 10:16:15.445420 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 10:16:15.536788 kubelet[2987]: I0912 10:16:15.535619 2987 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:15.536788 kubelet[2987]: E0912 10:16:15.536725 2987 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:15.567216 kubelet[2987]: I0912 10:16:15.567175 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c533fb2ab02a94bd7a3a036854ada4f3-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-6349f41dc3\" (UID: \"c533fb2ab02a94bd7a3a036854ada4f3\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:15.567380 kubelet[2987]: I0912 10:16:15.567262 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c533fb2ab02a94bd7a3a036854ada4f3-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-6349f41dc3\" (UID: \"c533fb2ab02a94bd7a3a036854ada4f3\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:15.567380 kubelet[2987]: I0912 10:16:15.567287 2987 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c533fb2ab02a94bd7a3a036854ada4f3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-6349f41dc3\" (UID: \"c533fb2ab02a94bd7a3a036854ada4f3\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:15.569590 kubelet[2987]: E0912 10:16:15.569551 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-6349f41dc3?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="400ms" Sep 12 10:16:15.738753 kubelet[2987]: I0912 10:16:15.738714 2987 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:15.739700 kubelet[2987]: E0912 10:16:15.739657 2987 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:15.909206 systemd[1]: Created slice kubepods-burstable-podc533fb2ab02a94bd7a3a036854ada4f3.slice - libcontainer container kubepods-burstable-podc533fb2ab02a94bd7a3a036854ada4f3.slice. 
Sep 12 10:16:15.920601 kubelet[2987]: E0912 10:16:15.920571 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:15.921512 containerd[1747]: time="2025-09-12T10:16:15.921454081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-6349f41dc3,Uid:c533fb2ab02a94bd7a3a036854ada4f3,Namespace:kube-system,Attempt:0,}" Sep 12 10:16:15.926104 systemd[1]: Created slice kubepods-burstable-pod00ebe07ab505ed0a04c75b1c3901f862.slice - libcontainer container kubepods-burstable-pod00ebe07ab505ed0a04c75b1c3901f862.slice. Sep 12 10:16:15.928335 kubelet[2987]: E0912 10:16:15.928311 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:15.963627 systemd[1]: Created slice kubepods-burstable-podb3733604dc2beb98e77466ba8bcf80b9.slice - libcontainer container kubepods-burstable-podb3733604dc2beb98e77466ba8bcf80b9.slice. 
Sep 12 10:16:15.966266 kubelet[2987]: E0912 10:16:15.966234 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:15.970596 kubelet[2987]: I0912 10:16:15.970566 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00ebe07ab505ed0a04c75b1c3901f862-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-6349f41dc3\" (UID: \"00ebe07ab505ed0a04c75b1c3901f862\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:15.970715 kubelet[2987]: I0912 10:16:15.970614 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3733604dc2beb98e77466ba8bcf80b9-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-6349f41dc3\" (UID: \"b3733604dc2beb98e77466ba8bcf80b9\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:15.970715 kubelet[2987]: I0912 10:16:15.970652 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00ebe07ab505ed0a04c75b1c3901f862-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-6349f41dc3\" (UID: \"00ebe07ab505ed0a04c75b1c3901f862\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:15.970715 kubelet[2987]: I0912 10:16:15.970683 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00ebe07ab505ed0a04c75b1c3901f862-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-6349f41dc3\" (UID: \"00ebe07ab505ed0a04c75b1c3901f862\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-6349f41dc3" Sep 12 
10:16:15.970830 kubelet[2987]: I0912 10:16:15.970715 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00ebe07ab505ed0a04c75b1c3901f862-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-6349f41dc3\" (UID: \"00ebe07ab505ed0a04c75b1c3901f862\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:15.970830 kubelet[2987]: I0912 10:16:15.970746 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/00ebe07ab505ed0a04c75b1c3901f862-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-n-6349f41dc3\" (UID: \"00ebe07ab505ed0a04c75b1c3901f862\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:15.971500 kubelet[2987]: E0912 10:16:15.971450 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-6349f41dc3?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="800ms" Sep 12 10:16:16.141541 kubelet[2987]: I0912 10:16:16.141504 2987 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:16.141921 kubelet[2987]: E0912 10:16:16.141885 2987 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:16.201062 kubelet[2987]: E0912 10:16:16.200908 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-6349f41dc3&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 10:16:16.229850 containerd[1747]: time="2025-09-12T10:16:16.229788349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-6349f41dc3,Uid:00ebe07ab505ed0a04c75b1c3901f862,Namespace:kube-system,Attempt:0,}" Sep 12 10:16:16.268065 containerd[1747]: time="2025-09-12T10:16:16.268026215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-6349f41dc3,Uid:b3733604dc2beb98e77466ba8bcf80b9,Namespace:kube-system,Attempt:0,}" Sep 12 10:16:16.269914 kubelet[2987]: E0912 10:16:16.269876 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 10:16:16.687339 kubelet[2987]: E0912 10:16:16.687297 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 10:16:16.772448 kubelet[2987]: E0912 10:16:16.772395 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-6349f41dc3?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="1.6s" Sep 12 10:16:16.944384 kubelet[2987]: I0912 10:16:16.944270 2987 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:16.944744 kubelet[2987]: E0912 10:16:16.944707 2987 kubelet_node_status.go:107] "Unable to register node with API server" 
err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:17.011654 kubelet[2987]: E0912 10:16:17.011607 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 10:16:17.690174 kubelet[2987]: E0912 10:16:17.431819 2987 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 10:16:17.962557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1115165608.mount: Deactivated successfully. 
Sep 12 10:16:17.983104 kubelet[2987]: E0912 10:16:17.983063 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-6349f41dc3&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 10:16:17.989042 containerd[1747]: time="2025-09-12T10:16:17.988991976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:16:18.001151 containerd[1747]: time="2025-09-12T10:16:18.000992085Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Sep 12 10:16:18.004067 containerd[1747]: time="2025-09-12T10:16:18.004032838Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:16:18.007359 containerd[1747]: time="2025-09-12T10:16:18.007322196Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:16:18.017856 containerd[1747]: time="2025-09-12T10:16:18.017676676Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 10:16:18.021482 containerd[1747]: time="2025-09-12T10:16:18.021404341Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:16:18.027428 containerd[1747]: time="2025-09-12T10:16:18.027363844Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 10:16:18.030928 containerd[1747]: time="2025-09-12T10:16:18.030882406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:16:18.032247 containerd[1747]: time="2025-09-12T10:16:18.031703320Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.110161638s" Sep 12 10:16:18.039366 containerd[1747]: time="2025-09-12T10:16:18.039330953Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.771211737s" Sep 12 10:16:18.042051 containerd[1747]: time="2025-09-12T10:16:18.042018300Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.812121649s" Sep 12 10:16:18.373047 kubelet[2987]: E0912 10:16:18.373006 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-6349f41dc3?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="3.2s" Sep 12 10:16:18.547702 
kubelet[2987]: I0912 10:16:18.547638 2987 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:18.548233 kubelet[2987]: E0912 10:16:18.548200 2987 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:18.677467 kubelet[2987]: E0912 10:16:18.677257 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 10:16:18.758657 containerd[1747]: time="2025-09-12T10:16:18.754960912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:16:18.758657 containerd[1747]: time="2025-09-12T10:16:18.758572075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:16:18.758657 containerd[1747]: time="2025-09-12T10:16:18.758590275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:16:18.758657 containerd[1747]: time="2025-09-12T10:16:18.758380771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:16:18.758657 containerd[1747]: time="2025-09-12T10:16:18.758456273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:16:18.758657 containerd[1747]: time="2025-09-12T10:16:18.758471273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:16:18.758657 containerd[1747]: time="2025-09-12T10:16:18.758559975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:16:18.761200 containerd[1747]: time="2025-09-12T10:16:18.759139685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:16:18.769057 containerd[1747]: time="2025-09-12T10:16:18.768748652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:16:18.769057 containerd[1747]: time="2025-09-12T10:16:18.768818653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:16:18.769057 containerd[1747]: time="2025-09-12T10:16:18.768843554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:16:18.769057 containerd[1747]: time="2025-09-12T10:16:18.768943655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:16:18.816136 systemd[1]: Started cri-containerd-1f4c63d8b0cc889e53807583fa87fa6e000593fbd886973bbf43fbec2bc0b2ff.scope - libcontainer container 1f4c63d8b0cc889e53807583fa87fa6e000593fbd886973bbf43fbec2bc0b2ff. Sep 12 10:16:18.818878 systemd[1]: Started cri-containerd-59082d79c1b9b9c3b1154619a04fb1d0910d51ce6c366264cc0e7ff09a7b1a51.scope - libcontainer container 59082d79c1b9b9c3b1154619a04fb1d0910d51ce6c366264cc0e7ff09a7b1a51. 
Sep 12 10:16:18.823364 systemd[1]: Started cri-containerd-dd34de074d3c12f95eec5d8e1e7f8626a5a2943bbae275928cb3cf50b75ac533.scope - libcontainer container dd34de074d3c12f95eec5d8e1e7f8626a5a2943bbae275928cb3cf50b75ac533. Sep 12 10:16:18.896702 containerd[1747]: time="2025-09-12T10:16:18.896613778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-6349f41dc3,Uid:b3733604dc2beb98e77466ba8bcf80b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd34de074d3c12f95eec5d8e1e7f8626a5a2943bbae275928cb3cf50b75ac533\"" Sep 12 10:16:18.906490 containerd[1747]: time="2025-09-12T10:16:18.906439449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-6349f41dc3,Uid:00ebe07ab505ed0a04c75b1c3901f862,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f4c63d8b0cc889e53807583fa87fa6e000593fbd886973bbf43fbec2bc0b2ff\"" Sep 12 10:16:18.919454 containerd[1747]: time="2025-09-12T10:16:18.919416375Z" level=info msg="CreateContainer within sandbox \"dd34de074d3c12f95eec5d8e1e7f8626a5a2943bbae275928cb3cf50b75ac533\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 10:16:18.924356 containerd[1747]: time="2025-09-12T10:16:18.924323760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-6349f41dc3,Uid:c533fb2ab02a94bd7a3a036854ada4f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"59082d79c1b9b9c3b1154619a04fb1d0910d51ce6c366264cc0e7ff09a7b1a51\"" Sep 12 10:16:18.925032 containerd[1747]: time="2025-09-12T10:16:18.925008772Z" level=info msg="CreateContainer within sandbox \"1f4c63d8b0cc889e53807583fa87fa6e000593fbd886973bbf43fbec2bc0b2ff\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 10:16:18.932381 containerd[1747]: time="2025-09-12T10:16:18.932296799Z" level=info msg="CreateContainer within sandbox \"59082d79c1b9b9c3b1154619a04fb1d0910d51ce6c366264cc0e7ff09a7b1a51\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 10:16:18.978166 containerd[1747]: time="2025-09-12T10:16:18.978126197Z" level=info msg="CreateContainer within sandbox \"dd34de074d3c12f95eec5d8e1e7f8626a5a2943bbae275928cb3cf50b75ac533\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d830bf27e5204424fdf6bb14e3ad1927f83a6309c6b368665cb98aa3c5b424f4\"" Sep 12 10:16:18.978977 containerd[1747]: time="2025-09-12T10:16:18.978902711Z" level=info msg="StartContainer for \"d830bf27e5204424fdf6bb14e3ad1927f83a6309c6b368665cb98aa3c5b424f4\"" Sep 12 10:16:18.991493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3553899096.mount: Deactivated successfully. Sep 12 10:16:19.012018 containerd[1747]: time="2025-09-12T10:16:19.011913685Z" level=info msg="CreateContainer within sandbox \"1f4c63d8b0cc889e53807583fa87fa6e000593fbd886973bbf43fbec2bc0b2ff\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"27efd1cfd647adeb4f707b282f01614d3b79168b49a66e937fbce14e58ac3f4b\"" Sep 12 10:16:19.015279 containerd[1747]: time="2025-09-12T10:16:19.012323693Z" level=info msg="StartContainer for \"27efd1cfd647adeb4f707b282f01614d3b79168b49a66e937fbce14e58ac3f4b\"" Sep 12 10:16:19.015157 systemd[1]: Started cri-containerd-d830bf27e5204424fdf6bb14e3ad1927f83a6309c6b368665cb98aa3c5b424f4.scope - libcontainer container d830bf27e5204424fdf6bb14e3ad1927f83a6309c6b368665cb98aa3c5b424f4. 
Sep 12 10:16:19.021295 containerd[1747]: time="2025-09-12T10:16:19.021169647Z" level=info msg="CreateContainer within sandbox \"59082d79c1b9b9c3b1154619a04fb1d0910d51ce6c366264cc0e7ff09a7b1a51\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8a51c9db30880cf12b9fa8ea1353532638e2b02346352d021c7c28b4affc3f1d\"" Sep 12 10:16:19.022146 containerd[1747]: time="2025-09-12T10:16:19.022069162Z" level=info msg="StartContainer for \"8a51c9db30880cf12b9fa8ea1353532638e2b02346352d021c7c28b4affc3f1d\"" Sep 12 10:16:19.062157 systemd[1]: Started cri-containerd-27efd1cfd647adeb4f707b282f01614d3b79168b49a66e937fbce14e58ac3f4b.scope - libcontainer container 27efd1cfd647adeb4f707b282f01614d3b79168b49a66e937fbce14e58ac3f4b. Sep 12 10:16:19.070429 systemd[1]: Started cri-containerd-8a51c9db30880cf12b9fa8ea1353532638e2b02346352d021c7c28b4affc3f1d.scope - libcontainer container 8a51c9db30880cf12b9fa8ea1353532638e2b02346352d021c7c28b4affc3f1d. Sep 12 10:16:19.100832 containerd[1747]: time="2025-09-12T10:16:19.100667131Z" level=info msg="StartContainer for \"d830bf27e5204424fdf6bb14e3ad1927f83a6309c6b368665cb98aa3c5b424f4\" returns successfully" Sep 12 10:16:19.147872 containerd[1747]: time="2025-09-12T10:16:19.147830352Z" level=info msg="StartContainer for \"8a51c9db30880cf12b9fa8ea1353532638e2b02346352d021c7c28b4affc3f1d\" returns successfully" Sep 12 10:16:19.178341 containerd[1747]: time="2025-09-12T10:16:19.178294282Z" level=info msg="StartContainer for \"27efd1cfd647adeb4f707b282f01614d3b79168b49a66e937fbce14e58ac3f4b\" returns successfully" Sep 12 10:16:19.459616 kubelet[2987]: E0912 10:16:19.459075 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:19.459616 kubelet[2987]: E0912 10:16:19.459471 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4230.2.2-n-6349f41dc3\" not found" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:19.464716 kubelet[2987]: E0912 10:16:19.464449 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:20.469008 kubelet[2987]: E0912 10:16:20.468930 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:20.470305 kubelet[2987]: E0912 10:16:20.470165 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:21.604722 kubelet[2987]: E0912 10:16:21.604671 2987 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.2-n-6349f41dc3\" not found" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:21.750530 kubelet[2987]: I0912 10:16:21.750497 2987 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:21.788977 kubelet[2987]: I0912 10:16:21.788569 2987 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:21.788977 kubelet[2987]: E0912 10:16:21.788610 2987 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.2-n-6349f41dc3\": node \"ci-4230.2.2-n-6349f41dc3\" not found" Sep 12 10:16:21.839685 kubelet[2987]: E0912 10:16:21.839634 2987 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" Sep 12 10:16:21.941167 kubelet[2987]: E0912 10:16:21.941034 2987 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" Sep 12 10:16:22.042158 kubelet[2987]: 
E0912 10:16:22.042103 2987 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" Sep 12 10:16:22.143191 kubelet[2987]: E0912 10:16:22.143144 2987 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" Sep 12 10:16:22.244450 kubelet[2987]: E0912 10:16:22.244001 2987 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" Sep 12 10:16:22.344633 kubelet[2987]: E0912 10:16:22.344571 2987 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" Sep 12 10:16:22.445158 kubelet[2987]: E0912 10:16:22.445108 2987 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" Sep 12 10:16:22.545990 kubelet[2987]: E0912 10:16:22.545852 2987 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" Sep 12 10:16:22.667031 kubelet[2987]: I0912 10:16:22.666986 2987 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:23.344856 kubelet[2987]: I0912 10:16:23.344793 2987 apiserver.go:52] "Watching apiserver" Sep 12 10:16:23.366484 kubelet[2987]: I0912 10:16:23.366432 2987 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 10:16:23.659216 kubelet[2987]: I0912 10:16:23.659059 2987 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 10:16:23.662177 kubelet[2987]: I0912 10:16:23.660908 2987 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:23.673619 kubelet[2987]: I0912 
10:16:23.672170 2987 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 10:16:23.673619 kubelet[2987]: I0912 10:16:23.672286 2987 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:23.685037 kubelet[2987]: I0912 10:16:23.685004 2987 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 10:16:23.917537 systemd[1]: Reload requested from client PID 3270 ('systemctl') (unit session-9.scope)... Sep 12 10:16:23.917555 systemd[1]: Reloading... Sep 12 10:16:24.030987 zram_generator::config[3313]: No configuration found. Sep 12 10:16:24.179770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:16:24.312818 systemd[1]: Reloading finished in 394 ms. Sep 12 10:16:24.345295 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:16:24.362306 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 10:16:24.362623 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:16:24.362701 systemd[1]: kubelet.service: Consumed 1.133s CPU time, 130.6M memory peak. Sep 12 10:16:24.373271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:16:24.487742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 10:16:24.499574 (kubelet)[3384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 10:16:24.540681 kubelet[3384]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 10:16:24.541056 kubelet[3384]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 10:16:24.541056 kubelet[3384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 10:16:24.541163 kubelet[3384]: I0912 10:16:24.541129 3384 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 10:16:24.546495 kubelet[3384]: I0912 10:16:24.546454 3384 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 10:16:24.546495 kubelet[3384]: I0912 10:16:24.546480 3384 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 10:16:24.546769 kubelet[3384]: I0912 10:16:24.546751 3384 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 10:16:24.548008 kubelet[3384]: I0912 10:16:24.547940 3384 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 10:16:24.550568 kubelet[3384]: I0912 10:16:24.549941 3384 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 10:16:24.556163 kubelet[3384]: E0912 10:16:24.556134 3384 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 10:16:24.556163 kubelet[3384]: I0912 10:16:24.556162 3384 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 10:16:24.559975 kubelet[3384]: I0912 10:16:24.559946 3384 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 10:16:24.560223 kubelet[3384]: I0912 10:16:24.560187 3384 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 10:16:24.560386 kubelet[3384]: I0912 10:16:24.560219 3384 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-n-6349f41dc3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"
CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 10:16:24.560503 kubelet[3384]: I0912 10:16:24.560396 3384 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 10:16:24.560503 kubelet[3384]: I0912 10:16:24.560410 3384 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 10:16:24.560503 kubelet[3384]: I0912 10:16:24.560463 3384 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:16:24.560654 kubelet[3384]: I0912 10:16:24.560641 3384 kubelet.go:480] "Attempting to sync node with API server" Sep 12 10:16:24.562280 kubelet[3384]: I0912 10:16:24.560664 3384 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 10:16:24.562280 kubelet[3384]: I0912 10:16:24.560692 3384 kubelet.go:386] "Adding apiserver pod source" Sep 12 10:16:24.562280 kubelet[3384]: I0912 10:16:24.560708 3384 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 10:16:24.562757 kubelet[3384]: I0912 10:16:24.562743 3384 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 10:16:24.563439 kubelet[3384]: I0912 10:16:24.563419 3384 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 10:16:24.568297 kubelet[3384]: I0912 10:16:24.568283 3384 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 10:16:24.568426 kubelet[3384]: I0912 10:16:24.568416 3384 server.go:1289] "Started kubelet" Sep 12 10:16:24.570136 kubelet[3384]: I0912 10:16:24.570119 3384 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Sep 12 10:16:24.580488 kubelet[3384]: I0912 10:16:24.580452 3384 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 10:16:24.582514 kubelet[3384]: I0912 10:16:24.581637 3384 server.go:317] "Adding debug handlers to kubelet server" Sep 12 10:16:24.585655 kubelet[3384]: I0912 10:16:24.585605 3384 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 10:16:24.585964 kubelet[3384]: I0912 10:16:24.585941 3384 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 10:16:24.586341 kubelet[3384]: I0912 10:16:24.586322 3384 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 10:16:24.589426 kubelet[3384]: I0912 10:16:24.589409 3384 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 10:16:24.589804 kubelet[3384]: E0912 10:16:24.589786 3384 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-6349f41dc3\" not found" Sep 12 10:16:24.592088 kubelet[3384]: I0912 10:16:24.592068 3384 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 10:16:24.592384 kubelet[3384]: I0912 10:16:24.592367 3384 reconciler.go:26] "Reconciler: start to sync state" Sep 12 10:16:24.595307 kubelet[3384]: I0912 10:16:24.595151 3384 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 10:16:24.596825 kubelet[3384]: I0912 10:16:24.596806 3384 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 12 10:16:24.597246 kubelet[3384]: I0912 10:16:24.596929 3384 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 10:16:24.597246 kubelet[3384]: I0912 10:16:24.596967 3384 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 10:16:24.597246 kubelet[3384]: I0912 10:16:24.596976 3384 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 10:16:24.597246 kubelet[3384]: E0912 10:16:24.597027 3384 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 10:16:24.603981 kubelet[3384]: I0912 10:16:24.603314 3384 factory.go:223] Registration of the systemd container factory successfully Sep 12 10:16:24.603981 kubelet[3384]: I0912 10:16:24.603425 3384 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 10:16:24.610708 kubelet[3384]: I0912 10:16:24.610680 3384 factory.go:223] Registration of the containerd container factory successfully Sep 12 10:16:24.615889 kubelet[3384]: E0912 10:16:24.615366 3384 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 10:16:24.667733 kubelet[3384]: I0912 10:16:24.667700 3384 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 10:16:24.667733 kubelet[3384]: I0912 10:16:24.667725 3384 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 10:16:24.667929 kubelet[3384]: I0912 10:16:24.667748 3384 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:16:24.667929 kubelet[3384]: I0912 10:16:24.667893 3384 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 10:16:24.667929 kubelet[3384]: I0912 10:16:24.667906 3384 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 10:16:24.667929 kubelet[3384]: I0912 10:16:24.667927 3384 policy_none.go:49] "None policy: Start" Sep 12 10:16:24.668119 kubelet[3384]: I0912 10:16:24.667940 3384 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 10:16:24.668119 kubelet[3384]: I0912 10:16:24.667967 3384 state_mem.go:35] "Initializing new in-memory state store" Sep 12 10:16:24.668203 kubelet[3384]: I0912 10:16:24.668154 3384 state_mem.go:75] "Updated machine memory state" Sep 12 10:16:24.673053 kubelet[3384]: E0912 10:16:24.672209 3384 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 10:16:24.673053 kubelet[3384]: I0912 10:16:24.672387 3384 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 10:16:24.673053 kubelet[3384]: I0912 10:16:24.672399 3384 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 10:16:24.673053 kubelet[3384]: I0912 10:16:24.672822 3384 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 10:16:24.674567 kubelet[3384]: E0912 10:16:24.674533 3384 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 10:16:24.698358 kubelet[3384]: I0912 10:16:24.698320 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.698682 kubelet[3384]: I0912 10:16:24.698320 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.698839 kubelet[3384]: I0912 10:16:24.698825 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.709641 kubelet[3384]: I0912 10:16:24.709585 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 10:16:24.709992 kubelet[3384]: E0912 10:16:24.709898 3384 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-n-6349f41dc3\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.710684 kubelet[3384]: I0912 10:16:24.710663 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 10:16:24.710819 kubelet[3384]: E0912 10:16:24.710738 3384 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-n-6349f41dc3\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.710819 kubelet[3384]: I0912 10:16:24.710663 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 10:16:24.710981 kubelet[3384]: E0912 10:16:24.710892 3384 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.2-n-6349f41dc3\" already exists" 
pod="kube-system/kube-controller-manager-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.776262 kubelet[3384]: I0912 10:16:24.776101 3384 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.786522 kubelet[3384]: I0912 10:16:24.786476 3384 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.786676 kubelet[3384]: I0912 10:16:24.786599 3384 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.896032 kubelet[3384]: I0912 10:16:24.895974 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c533fb2ab02a94bd7a3a036854ada4f3-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-6349f41dc3\" (UID: \"c533fb2ab02a94bd7a3a036854ada4f3\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.896032 kubelet[3384]: I0912 10:16:24.896042 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c533fb2ab02a94bd7a3a036854ada4f3-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-6349f41dc3\" (UID: \"c533fb2ab02a94bd7a3a036854ada4f3\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.896357 kubelet[3384]: I0912 10:16:24.896072 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c533fb2ab02a94bd7a3a036854ada4f3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-6349f41dc3\" (UID: \"c533fb2ab02a94bd7a3a036854ada4f3\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.896357 kubelet[3384]: I0912 10:16:24.896098 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/00ebe07ab505ed0a04c75b1c3901f862-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-6349f41dc3\" (UID: \"00ebe07ab505ed0a04c75b1c3901f862\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.896357 kubelet[3384]: I0912 10:16:24.896123 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00ebe07ab505ed0a04c75b1c3901f862-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-6349f41dc3\" (UID: \"00ebe07ab505ed0a04c75b1c3901f862\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.896357 kubelet[3384]: I0912 10:16:24.896143 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3733604dc2beb98e77466ba8bcf80b9-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-6349f41dc3\" (UID: \"b3733604dc2beb98e77466ba8bcf80b9\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.896357 kubelet[3384]: I0912 10:16:24.896162 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00ebe07ab505ed0a04c75b1c3901f862-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-6349f41dc3\" (UID: \"00ebe07ab505ed0a04c75b1c3901f862\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.896490 kubelet[3384]: I0912 10:16:24.896185 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/00ebe07ab505ed0a04c75b1c3901f862-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-n-6349f41dc3\" (UID: \"00ebe07ab505ed0a04c75b1c3901f862\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:24.896490 kubelet[3384]: I0912 
10:16:24.896208 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00ebe07ab505ed0a04c75b1c3901f862-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-6349f41dc3\" (UID: \"00ebe07ab505ed0a04c75b1c3901f862\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:25.232527 sudo[3422]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 10:16:25.232900 sudo[3422]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 10:16:25.563198 kubelet[3384]: I0912 10:16:25.562100 3384 apiserver.go:52] "Watching apiserver" Sep 12 10:16:25.593974 kubelet[3384]: I0912 10:16:25.593110 3384 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 10:16:25.646974 kubelet[3384]: I0912 10:16:25.646848 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:25.659022 kubelet[3384]: I0912 10:16:25.658989 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 10:16:25.659203 kubelet[3384]: E0912 10:16:25.659057 3384 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-n-6349f41dc3\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-n-6349f41dc3" Sep 12 10:16:25.693687 kubelet[3384]: I0912 10:16:25.693606 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.2-n-6349f41dc3" podStartSLOduration=2.693583599 podStartE2EDuration="2.693583599s" podCreationTimestamp="2025-09-12 10:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-12 10:16:25.679017773 +0000 UTC m=+1.174231206" watchObservedRunningTime="2025-09-12 10:16:25.693583599 +0000 UTC m=+1.188797032" Sep 12 10:16:25.712971 kubelet[3384]: I0912 10:16:25.710441 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.2-n-6349f41dc3" podStartSLOduration=2.71042306 podStartE2EDuration="2.71042306s" podCreationTimestamp="2025-09-12 10:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:16:25.694499413 +0000 UTC m=+1.189712846" watchObservedRunningTime="2025-09-12 10:16:25.71042306 +0000 UTC m=+1.205636593" Sep 12 10:16:25.781081 sudo[3422]: pam_unix(sudo:session): session closed for user root Sep 12 10:16:27.278738 sudo[2293]: pam_unix(sudo:session): session closed for user root Sep 12 10:16:27.378973 sshd[2292]: Connection closed by 10.200.16.10 port 41334 Sep 12 10:16:27.379678 sshd-session[2290]: pam_unix(sshd:session): session closed for user core Sep 12 10:16:27.382873 systemd[1]: sshd@6-10.200.8.13:22-10.200.16.10:41334.service: Deactivated successfully. Sep 12 10:16:27.385394 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 10:16:27.385624 systemd[1]: session-9.scope: Consumed 5.609s CPU time, 265.9M memory peak. Sep 12 10:16:27.389597 systemd-logind[1727]: Session 9 logged out. Waiting for processes to exit. Sep 12 10:16:27.390842 systemd-logind[1727]: Removed session 9. 
Sep 12 10:16:29.547740 kubelet[3384]: I0912 10:16:29.547706 3384 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 10:16:29.548349 kubelet[3384]: I0912 10:16:29.548289 3384 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 10:16:29.548441 containerd[1747]: time="2025-09-12T10:16:29.548089056Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 10:16:30.439302 kubelet[3384]: I0912 10:16:30.439237 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-6349f41dc3" podStartSLOduration=7.439195677 podStartE2EDuration="7.439195677s" podCreationTimestamp="2025-09-12 10:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:16:25.710685364 +0000 UTC m=+1.205898897" watchObservedRunningTime="2025-09-12 10:16:30.439195677 +0000 UTC m=+5.934409210" Sep 12 10:16:30.614524 systemd[1]: Created slice kubepods-besteffort-pode4fbc298_2549_4efe_9c86_d2c1eb17c1ea.slice - libcontainer container kubepods-besteffort-pode4fbc298_2549_4efe_9c86_d2c1eb17c1ea.slice. Sep 12 10:16:30.627361 systemd[1]: Created slice kubepods-burstable-pode473bc90_8bce_4e54_b4a6_49d1df19f643.slice - libcontainer container kubepods-burstable-pode473bc90_8bce_4e54_b4a6_49d1df19f643.slice. 
Sep 12 10:16:30.631733 kubelet[3384]: I0912 10:16:30.631697 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4fbc298-2549-4efe-9c86-d2c1eb17c1ea-lib-modules\") pod \"kube-proxy-vrtjm\" (UID: \"e4fbc298-2549-4efe-9c86-d2c1eb17c1ea\") " pod="kube-system/kube-proxy-vrtjm" Sep 12 10:16:30.632148 kubelet[3384]: I0912 10:16:30.631744 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-cni-path\") pod \"cilium-tbl6m\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") " pod="kube-system/cilium-tbl6m" Sep 12 10:16:30.632148 kubelet[3384]: I0912 10:16:30.631772 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-host-proc-sys-net\") pod \"cilium-tbl6m\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") " pod="kube-system/cilium-tbl6m" Sep 12 10:16:30.632148 kubelet[3384]: I0912 10:16:30.631793 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-host-proc-sys-kernel\") pod \"cilium-tbl6m\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") " pod="kube-system/cilium-tbl6m" Sep 12 10:16:30.632148 kubelet[3384]: I0912 10:16:30.631820 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-bpf-maps\") pod \"cilium-tbl6m\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") " pod="kube-system/cilium-tbl6m" Sep 12 10:16:30.632148 kubelet[3384]: I0912 10:16:30.631841 3384 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-lib-modules\") pod \"cilium-tbl6m\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") " pod="kube-system/cilium-tbl6m" Sep 12 10:16:30.632148 kubelet[3384]: I0912 10:16:30.631867 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e473bc90-8bce-4e54-b4a6-49d1df19f643-hubble-tls\") pod \"cilium-tbl6m\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") " pod="kube-system/cilium-tbl6m" Sep 12 10:16:30.632656 kubelet[3384]: I0912 10:16:30.631891 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl9bx\" (UniqueName: \"kubernetes.io/projected/e473bc90-8bce-4e54-b4a6-49d1df19f643-kube-api-access-dl9bx\") pod \"cilium-tbl6m\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") " pod="kube-system/cilium-tbl6m" Sep 12 10:16:30.632656 kubelet[3384]: I0912 10:16:30.631919 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbd4k\" (UniqueName: \"kubernetes.io/projected/e4fbc298-2549-4efe-9c86-d2c1eb17c1ea-kube-api-access-cbd4k\") pod \"kube-proxy-vrtjm\" (UID: \"e4fbc298-2549-4efe-9c86-d2c1eb17c1ea\") " pod="kube-system/kube-proxy-vrtjm" Sep 12 10:16:30.632656 kubelet[3384]: I0912 10:16:30.631942 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-hostproc\") pod \"cilium-tbl6m\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") " pod="kube-system/cilium-tbl6m" Sep 12 10:16:30.632656 kubelet[3384]: I0912 10:16:30.632013 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-cilium-cgroup\") pod \"cilium-tbl6m\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") " pod="kube-system/cilium-tbl6m" Sep 12 10:16:30.632656 kubelet[3384]: I0912 10:16:30.632035 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-etc-cni-netd\") pod \"cilium-tbl6m\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") " pod="kube-system/cilium-tbl6m" Sep 12 10:16:30.632656 kubelet[3384]: I0912 10:16:30.632059 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-xtables-lock\") pod \"cilium-tbl6m\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") " pod="kube-system/cilium-tbl6m" Sep 12 10:16:30.632896 kubelet[3384]: I0912 10:16:30.632080 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e4fbc298-2549-4efe-9c86-d2c1eb17c1ea-kube-proxy\") pod \"kube-proxy-vrtjm\" (UID: \"e4fbc298-2549-4efe-9c86-d2c1eb17c1ea\") " pod="kube-system/kube-proxy-vrtjm" Sep 12 10:16:30.632896 kubelet[3384]: I0912 10:16:30.632103 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-cilium-run\") pod \"cilium-tbl6m\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") " pod="kube-system/cilium-tbl6m" Sep 12 10:16:30.632896 kubelet[3384]: I0912 10:16:30.632124 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e473bc90-8bce-4e54-b4a6-49d1df19f643-clustermesh-secrets\") pod \"cilium-tbl6m\" (UID: 
\"e473bc90-8bce-4e54-b4a6-49d1df19f643\") " pod="kube-system/cilium-tbl6m" Sep 12 10:16:30.632896 kubelet[3384]: I0912 10:16:30.632145 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e473bc90-8bce-4e54-b4a6-49d1df19f643-cilium-config-path\") pod \"cilium-tbl6m\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") " pod="kube-system/cilium-tbl6m" Sep 12 10:16:30.632896 kubelet[3384]: I0912 10:16:30.632172 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4fbc298-2549-4efe-9c86-d2c1eb17c1ea-xtables-lock\") pod \"kube-proxy-vrtjm\" (UID: \"e4fbc298-2549-4efe-9c86-d2c1eb17c1ea\") " pod="kube-system/kube-proxy-vrtjm" Sep 12 10:16:30.777280 systemd[1]: Created slice kubepods-besteffort-pod27be7af7_0b86_421b_a91f_75e015caf6fb.slice - libcontainer container kubepods-besteffort-pod27be7af7_0b86_421b_a91f_75e015caf6fb.slice. 
Sep 12 10:16:30.834539 kubelet[3384]: I0912 10:16:30.834490 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxrbm\" (UniqueName: \"kubernetes.io/projected/27be7af7-0b86-421b-a91f-75e015caf6fb-kube-api-access-jxrbm\") pod \"cilium-operator-6c4d7847fc-vwbtv\" (UID: \"27be7af7-0b86-421b-a91f-75e015caf6fb\") " pod="kube-system/cilium-operator-6c4d7847fc-vwbtv" Sep 12 10:16:30.834712 kubelet[3384]: I0912 10:16:30.834540 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27be7af7-0b86-421b-a91f-75e015caf6fb-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vwbtv\" (UID: \"27be7af7-0b86-421b-a91f-75e015caf6fb\") " pod="kube-system/cilium-operator-6c4d7847fc-vwbtv" Sep 12 10:16:30.925081 containerd[1747]: time="2025-09-12T10:16:30.925036020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vrtjm,Uid:e4fbc298-2549-4efe-9c86-d2c1eb17c1ea,Namespace:kube-system,Attempt:0,}" Sep 12 10:16:30.932923 containerd[1747]: time="2025-09-12T10:16:30.932185831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tbl6m,Uid:e473bc90-8bce-4e54-b4a6-49d1df19f643,Namespace:kube-system,Attempt:0,}" Sep 12 10:16:31.012499 containerd[1747]: time="2025-09-12T10:16:31.012215573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:16:31.012499 containerd[1747]: time="2025-09-12T10:16:31.012280874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:16:31.012499 containerd[1747]: time="2025-09-12T10:16:31.012300474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:16:31.012499 containerd[1747]: time="2025-09-12T10:16:31.012388976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:16:31.047131 containerd[1747]: time="2025-09-12T10:16:31.044583576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:16:31.047131 containerd[1747]: time="2025-09-12T10:16:31.046639608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:16:31.047131 containerd[1747]: time="2025-09-12T10:16:31.046989913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:16:31.048450 containerd[1747]: time="2025-09-12T10:16:31.047500221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:16:31.054204 systemd[1]: Started cri-containerd-eb1010b9f554c2768322b10b5df77330756b3546fcc7c0eaf221a9c26ee909a7.scope - libcontainer container eb1010b9f554c2768322b10b5df77330756b3546fcc7c0eaf221a9c26ee909a7. Sep 12 10:16:31.080146 systemd[1]: Started cri-containerd-067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5.scope - libcontainer container 067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5. 
Sep 12 10:16:31.088143 containerd[1747]: time="2025-09-12T10:16:31.087526342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vwbtv,Uid:27be7af7-0b86-421b-a91f-75e015caf6fb,Namespace:kube-system,Attempt:0,}" Sep 12 10:16:31.119174 containerd[1747]: time="2025-09-12T10:16:31.119136133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vrtjm,Uid:e4fbc298-2549-4efe-9c86-d2c1eb17c1ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb1010b9f554c2768322b10b5df77330756b3546fcc7c0eaf221a9c26ee909a7\"" Sep 12 10:16:31.119597 containerd[1747]: time="2025-09-12T10:16:31.119553940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tbl6m,Uid:e473bc90-8bce-4e54-b4a6-49d1df19f643,Namespace:kube-system,Attempt:0,} returns sandbox id \"067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5\"" Sep 12 10:16:31.125030 containerd[1747]: time="2025-09-12T10:16:31.125005124Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 10:16:31.129974 containerd[1747]: time="2025-09-12T10:16:31.129923901Z" level=info msg="CreateContainer within sandbox \"eb1010b9f554c2768322b10b5df77330756b3546fcc7c0eaf221a9c26ee909a7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 10:16:31.144854 containerd[1747]: time="2025-09-12T10:16:31.144701730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:16:31.144854 containerd[1747]: time="2025-09-12T10:16:31.144760131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:16:31.144854 containerd[1747]: time="2025-09-12T10:16:31.144780831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:16:31.145277 containerd[1747]: time="2025-09-12T10:16:31.145050036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:16:31.166123 systemd[1]: Started cri-containerd-36b2abcc3da06c11bdb3a79693cbdee8be6b99357d7fdd700f44dd0f02456630.scope - libcontainer container 36b2abcc3da06c11bdb3a79693cbdee8be6b99357d7fdd700f44dd0f02456630. Sep 12 10:16:31.186182 containerd[1747]: time="2025-09-12T10:16:31.186137473Z" level=info msg="CreateContainer within sandbox \"eb1010b9f554c2768322b10b5df77330756b3546fcc7c0eaf221a9c26ee909a7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f1f76a2c9fb855f0d9930375a00d91d944423f09b3d2c0fab0a45fc98039f014\"" Sep 12 10:16:31.186859 containerd[1747]: time="2025-09-12T10:16:31.186824384Z" level=info msg="StartContainer for \"f1f76a2c9fb855f0d9930375a00d91d944423f09b3d2c0fab0a45fc98039f014\"" Sep 12 10:16:31.223158 systemd[1]: Started cri-containerd-f1f76a2c9fb855f0d9930375a00d91d944423f09b3d2c0fab0a45fc98039f014.scope - libcontainer container f1f76a2c9fb855f0d9930375a00d91d944423f09b3d2c0fab0a45fc98039f014. 
Sep 12 10:16:31.228026 containerd[1747]: time="2025-09-12T10:16:31.227651318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vwbtv,Uid:27be7af7-0b86-421b-a91f-75e015caf6fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"36b2abcc3da06c11bdb3a79693cbdee8be6b99357d7fdd700f44dd0f02456630\"" Sep 12 10:16:31.270938 containerd[1747]: time="2025-09-12T10:16:31.270883589Z" level=info msg="StartContainer for \"f1f76a2c9fb855f0d9930375a00d91d944423f09b3d2c0fab0a45fc98039f014\" returns successfully" Sep 12 10:16:33.709850 kubelet[3384]: I0912 10:16:33.709628 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vrtjm" podStartSLOduration=3.709606952 podStartE2EDuration="3.709606952s" podCreationTimestamp="2025-09-12 10:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:16:31.680840954 +0000 UTC m=+7.176054387" watchObservedRunningTime="2025-09-12 10:16:33.709606952 +0000 UTC m=+9.204820385" Sep 12 10:16:37.548569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515032295.mount: Deactivated successfully. 
Sep 12 10:16:39.782083 containerd[1747]: time="2025-09-12T10:16:39.782020930Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:16:39.784539 containerd[1747]: time="2025-09-12T10:16:39.784381466Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 10:16:39.787663 containerd[1747]: time="2025-09-12T10:16:39.787398211Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:16:39.788894 containerd[1747]: time="2025-09-12T10:16:39.788858033Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.663714207s" Sep 12 10:16:39.788995 containerd[1747]: time="2025-09-12T10:16:39.788898834Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 10:16:39.791031 containerd[1747]: time="2025-09-12T10:16:39.790999865Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 10:16:39.797297 containerd[1747]: time="2025-09-12T10:16:39.797269059Z" level=info msg="CreateContainer within sandbox \"067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 10:16:39.822495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2038305282.mount: Deactivated successfully. Sep 12 10:16:39.835292 containerd[1747]: time="2025-09-12T10:16:39.835173829Z" level=info msg="CreateContainer within sandbox \"067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482\"" Sep 12 10:16:39.836728 containerd[1747]: time="2025-09-12T10:16:39.836690751Z" level=info msg="StartContainer for \"e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482\"" Sep 12 10:16:39.879121 systemd[1]: Started cri-containerd-e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482.scope - libcontainer container e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482. Sep 12 10:16:39.909728 containerd[1747]: time="2025-09-12T10:16:39.909685148Z" level=info msg="StartContainer for \"e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482\" returns successfully" Sep 12 10:16:39.921095 systemd[1]: cri-containerd-e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482.scope: Deactivated successfully. Sep 12 10:16:40.816837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482-rootfs.mount: Deactivated successfully. 
Sep 12 10:16:43.666419 containerd[1747]: time="2025-09-12T10:16:43.666351964Z" level=info msg="shim disconnected" id=e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482 namespace=k8s.io Sep 12 10:16:43.666419 containerd[1747]: time="2025-09-12T10:16:43.666407665Z" level=warning msg="cleaning up after shim disconnected" id=e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482 namespace=k8s.io Sep 12 10:16:43.666419 containerd[1747]: time="2025-09-12T10:16:43.666421665Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:16:43.699249 containerd[1747]: time="2025-09-12T10:16:43.699068755Z" level=info msg="CreateContainer within sandbox \"067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 10:16:43.790764 containerd[1747]: time="2025-09-12T10:16:43.790711932Z" level=info msg="CreateContainer within sandbox \"067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024\"" Sep 12 10:16:43.791390 containerd[1747]: time="2025-09-12T10:16:43.791359741Z" level=info msg="StartContainer for \"2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024\"" Sep 12 10:16:43.824112 systemd[1]: Started cri-containerd-2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024.scope - libcontainer container 2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024. Sep 12 10:16:43.864597 containerd[1747]: time="2025-09-12T10:16:43.864555041Z" level=info msg="StartContainer for \"2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024\" returns successfully" Sep 12 10:16:43.869876 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 10:16:43.870861 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Sep 12 10:16:43.871085 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:16:43.883148 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:16:43.886353 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 10:16:43.888435 systemd[1]: cri-containerd-2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024.scope: Deactivated successfully. Sep 12 10:16:43.918480 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:16:43.953944 containerd[1747]: time="2025-09-12T10:16:43.953879682Z" level=info msg="shim disconnected" id=2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024 namespace=k8s.io Sep 12 10:16:43.955063 containerd[1747]: time="2025-09-12T10:16:43.954837396Z" level=warning msg="cleaning up after shim disconnected" id=2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024 namespace=k8s.io Sep 12 10:16:43.955063 containerd[1747]: time="2025-09-12T10:16:43.954864797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:16:44.705795 containerd[1747]: time="2025-09-12T10:16:44.705613571Z" level=info msg="CreateContainer within sandbox \"067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 10:16:44.765855 containerd[1747]: time="2025-09-12T10:16:44.765514571Z" level=info msg="CreateContainer within sandbox \"067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170\"" Sep 12 10:16:44.767300 containerd[1747]: time="2025-09-12T10:16:44.767270097Z" level=info msg="StartContainer for \"57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170\"" Sep 12 10:16:44.779508 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024-rootfs.mount: Deactivated successfully. Sep 12 10:16:44.828151 systemd[1]: Started cri-containerd-57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170.scope - libcontainer container 57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170. Sep 12 10:16:44.886537 systemd[1]: cri-containerd-57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170.scope: Deactivated successfully. Sep 12 10:16:44.897539 containerd[1747]: time="2025-09-12T10:16:44.896877044Z" level=info msg="StartContainer for \"57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170\" returns successfully" Sep 12 10:16:44.921477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170-rootfs.mount: Deactivated successfully. Sep 12 10:16:45.358669 containerd[1747]: time="2025-09-12T10:16:45.358601478Z" level=info msg="shim disconnected" id=57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170 namespace=k8s.io Sep 12 10:16:45.358669 containerd[1747]: time="2025-09-12T10:16:45.358662279Z" level=warning msg="cleaning up after shim disconnected" id=57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170 namespace=k8s.io Sep 12 10:16:45.358669 containerd[1747]: time="2025-09-12T10:16:45.358673179Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:16:45.401802 containerd[1747]: time="2025-09-12T10:16:45.401734725Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:16:45.405492 containerd[1747]: time="2025-09-12T10:16:45.405443381Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, 
bytes read=18904197" Sep 12 10:16:45.409931 containerd[1747]: time="2025-09-12T10:16:45.409873648Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:16:45.411458 containerd[1747]: time="2025-09-12T10:16:45.411333570Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.620294003s" Sep 12 10:16:45.411458 containerd[1747]: time="2025-09-12T10:16:45.411373070Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 10:16:45.420978 containerd[1747]: time="2025-09-12T10:16:45.420922314Z" level=info msg="CreateContainer within sandbox \"36b2abcc3da06c11bdb3a79693cbdee8be6b99357d7fdd700f44dd0f02456630\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 10:16:45.454603 containerd[1747]: time="2025-09-12T10:16:45.454550319Z" level=info msg="CreateContainer within sandbox \"36b2abcc3da06c11bdb3a79693cbdee8be6b99357d7fdd700f44dd0f02456630\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460\"" Sep 12 10:16:45.456789 containerd[1747]: time="2025-09-12T10:16:45.455226629Z" level=info msg="StartContainer for \"0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460\"" Sep 12 10:16:45.485172 systemd[1]: Started cri-containerd-0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460.scope 
- libcontainer container 0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460. Sep 12 10:16:45.518218 containerd[1747]: time="2025-09-12T10:16:45.518091573Z" level=info msg="StartContainer for \"0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460\" returns successfully" Sep 12 10:16:45.713250 containerd[1747]: time="2025-09-12T10:16:45.713122002Z" level=info msg="CreateContainer within sandbox \"067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 10:16:45.775984 containerd[1747]: time="2025-09-12T10:16:45.775267035Z" level=info msg="CreateContainer within sandbox \"067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6\"" Sep 12 10:16:45.779576 containerd[1747]: time="2025-09-12T10:16:45.779539599Z" level=info msg="StartContainer for \"43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6\"" Sep 12 10:16:45.847200 systemd[1]: Started cri-containerd-43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6.scope - libcontainer container 43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6. Sep 12 10:16:45.905547 systemd[1]: cri-containerd-43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6.scope: Deactivated successfully. 
Sep 12 10:16:45.907592 containerd[1747]: time="2025-09-12T10:16:45.907380219Z" level=info msg="StartContainer for \"43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6\" returns successfully" Sep 12 10:16:45.912805 kubelet[3384]: I0912 10:16:45.912001 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vwbtv" podStartSLOduration=1.7292725519999999 podStartE2EDuration="15.911980688s" podCreationTimestamp="2025-09-12 10:16:30 +0000 UTC" firstStartedPulling="2025-09-12 10:16:31.229796351 +0000 UTC m=+6.725009784" lastFinishedPulling="2025-09-12 10:16:45.412504487 +0000 UTC m=+20.907717920" observedRunningTime="2025-09-12 10:16:45.773505709 +0000 UTC m=+21.268719242" watchObservedRunningTime="2025-09-12 10:16:45.911980688 +0000 UTC m=+21.407194121" Sep 12 10:16:45.941049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6-rootfs.mount: Deactivated successfully. 
Sep 12 10:16:45.958026 containerd[1747]: time="2025-09-12T10:16:45.957943678Z" level=info msg="shim disconnected" id=43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6 namespace=k8s.io Sep 12 10:16:45.958224 containerd[1747]: time="2025-09-12T10:16:45.958068880Z" level=warning msg="cleaning up after shim disconnected" id=43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6 namespace=k8s.io Sep 12 10:16:45.958224 containerd[1747]: time="2025-09-12T10:16:45.958090181Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:16:46.720630 containerd[1747]: time="2025-09-12T10:16:46.720575364Z" level=info msg="CreateContainer within sandbox \"067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 10:16:46.767522 containerd[1747]: time="2025-09-12T10:16:46.767472725Z" level=info msg="CreateContainer within sandbox \"067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc\"" Sep 12 10:16:46.769050 containerd[1747]: time="2025-09-12T10:16:46.768055533Z" level=info msg="StartContainer for \"99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc\"" Sep 12 10:16:46.811134 systemd[1]: Started cri-containerd-99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc.scope - libcontainer container 99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc. 
Sep 12 10:16:46.845133 containerd[1747]: time="2025-09-12T10:16:46.844510310Z" level=info msg="StartContainer for \"99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc\" returns successfully"
Sep 12 10:16:46.971808 kubelet[3384]: I0912 10:16:46.970780 3384 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 12 10:16:47.044567 systemd[1]: Created slice kubepods-burstable-pod01013f62_3cc1_47cf_98d2_a13d1bd575c5.slice - libcontainer container kubepods-burstable-pod01013f62_3cc1_47cf_98d2_a13d1bd575c5.slice.
Sep 12 10:16:47.052416 kubelet[3384]: I0912 10:16:47.050512 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsvjt\" (UniqueName: \"kubernetes.io/projected/01013f62-3cc1-47cf-98d2-a13d1bd575c5-kube-api-access-jsvjt\") pod \"coredns-674b8bbfcf-bsbxp\" (UID: \"01013f62-3cc1-47cf-98d2-a13d1bd575c5\") " pod="kube-system/coredns-674b8bbfcf-bsbxp"
Sep 12 10:16:47.052416 kubelet[3384]: I0912 10:16:47.050563 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg5d8\" (UniqueName: \"kubernetes.io/projected/0ff64da6-78b8-40f2-9e37-d5f9f259769a-kube-api-access-cg5d8\") pod \"coredns-674b8bbfcf-cr4rd\" (UID: \"0ff64da6-78b8-40f2-9e37-d5f9f259769a\") " pod="kube-system/coredns-674b8bbfcf-cr4rd"
Sep 12 10:16:47.052416 kubelet[3384]: I0912 10:16:47.050595 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01013f62-3cc1-47cf-98d2-a13d1bd575c5-config-volume\") pod \"coredns-674b8bbfcf-bsbxp\" (UID: \"01013f62-3cc1-47cf-98d2-a13d1bd575c5\") " pod="kube-system/coredns-674b8bbfcf-bsbxp"
Sep 12 10:16:47.052416 kubelet[3384]: I0912 10:16:47.050617 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ff64da6-78b8-40f2-9e37-d5f9f259769a-config-volume\") pod \"coredns-674b8bbfcf-cr4rd\" (UID: \"0ff64da6-78b8-40f2-9e37-d5f9f259769a\") " pod="kube-system/coredns-674b8bbfcf-cr4rd"
Sep 12 10:16:47.055698 systemd[1]: Created slice kubepods-burstable-pod0ff64da6_78b8_40f2_9e37_d5f9f259769a.slice - libcontainer container kubepods-burstable-pod0ff64da6_78b8_40f2_9e37_d5f9f259769a.slice.
Sep 12 10:16:47.352205 containerd[1747]: time="2025-09-12T10:16:47.351781557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bsbxp,Uid:01013f62-3cc1-47cf-98d2-a13d1bd575c5,Namespace:kube-system,Attempt:0,}"
Sep 12 10:16:47.363033 containerd[1747]: time="2025-09-12T10:16:47.362987515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cr4rd,Uid:0ff64da6-78b8-40f2-9e37-d5f9f259769a,Namespace:kube-system,Attempt:0,}"
Sep 12 10:16:47.745588 kubelet[3384]: I0912 10:16:47.745192 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tbl6m" podStartSLOduration=9.077520633 podStartE2EDuration="17.7451744s" podCreationTimestamp="2025-09-12 10:16:30 +0000 UTC" firstStartedPulling="2025-09-12 10:16:31.122336183 +0000 UTC m=+6.617549616" lastFinishedPulling="2025-09-12 10:16:39.78998995 +0000 UTC m=+15.285203383" observedRunningTime="2025-09-12 10:16:47.745129399 +0000 UTC m=+23.240342932" watchObservedRunningTime="2025-09-12 10:16:47.7451744 +0000 UTC m=+23.240387833"
Sep 12 10:16:49.349407 systemd-networkd[1580]: cilium_host: Link UP
Sep 12 10:16:49.349591 systemd-networkd[1580]: cilium_net: Link UP
Sep 12 10:16:49.349597 systemd-networkd[1580]: cilium_net: Gained carrier
Sep 12 10:16:49.349787 systemd-networkd[1580]: cilium_host: Gained carrier
Sep 12 10:16:49.353051 systemd-networkd[1580]: cilium_host: Gained IPv6LL
Sep 12 10:16:49.560128 systemd-networkd[1580]: cilium_vxlan: Link UP
Sep 12 10:16:49.560138 systemd-networkd[1580]: cilium_vxlan: Gained carrier
Sep 12 10:16:49.811033 kernel: NET: Registered PF_ALG protocol family
Sep 12 10:16:49.933101 systemd-networkd[1580]: cilium_net: Gained IPv6LL
Sep 12 10:16:50.613248 systemd-networkd[1580]: lxc_health: Link UP
Sep 12 10:16:50.613562 systemd-networkd[1580]: lxc_health: Gained carrier
Sep 12 10:16:50.764112 systemd-networkd[1580]: cilium_vxlan: Gained IPv6LL
Sep 12 10:16:50.954468 kernel: eth0: renamed from tmpb9862
Sep 12 10:16:50.947033 systemd-networkd[1580]: lxc0393417fd0bb: Link UP
Sep 12 10:16:50.969567 systemd-networkd[1580]: lxc0393417fd0bb: Gained carrier
Sep 12 10:16:50.994892 systemd-networkd[1580]: lxc6b0c53e709b0: Link UP
Sep 12 10:16:50.998007 kernel: eth0: renamed from tmp3bac6
Sep 12 10:16:51.011429 systemd-networkd[1580]: lxc6b0c53e709b0: Gained carrier
Sep 12 10:16:51.660168 systemd-networkd[1580]: lxc_health: Gained IPv6LL
Sep 12 10:16:52.748162 systemd-networkd[1580]: lxc0393417fd0bb: Gained IPv6LL
Sep 12 10:16:52.876120 systemd-networkd[1580]: lxc6b0c53e709b0: Gained IPv6LL
Sep 12 10:16:54.825754 containerd[1747]: time="2025-09-12T10:16:54.825622861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 10:16:54.827030 containerd[1747]: time="2025-09-12T10:16:54.825763063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 10:16:54.827030 containerd[1747]: time="2025-09-12T10:16:54.825802264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:16:54.827030 containerd[1747]: time="2025-09-12T10:16:54.825918066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:16:54.869741 containerd[1747]: time="2025-09-12T10:16:54.867176389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 10:16:54.869741 containerd[1747]: time="2025-09-12T10:16:54.867307991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 10:16:54.869741 containerd[1747]: time="2025-09-12T10:16:54.867372192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:16:54.869741 containerd[1747]: time="2025-09-12T10:16:54.867502294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:16:54.897581 systemd[1]: Started cri-containerd-b9862cd0febfbc455a2c62acc5c528be10486b8e6cf1dce6469bf2b86e7d5023.scope - libcontainer container b9862cd0febfbc455a2c62acc5c528be10486b8e6cf1dce6469bf2b86e7d5023.
Sep 12 10:16:54.922729 systemd[1]: run-containerd-runc-k8s.io-3bac67aa347a3d59366a17a9d3a1f045293e27d78057d391382734039f010c55-runc.BbXPyz.mount: Deactivated successfully.
Sep 12 10:16:54.939185 systemd[1]: Started cri-containerd-3bac67aa347a3d59366a17a9d3a1f045293e27d78057d391382734039f010c55.scope - libcontainer container 3bac67aa347a3d59366a17a9d3a1f045293e27d78057d391382734039f010c55.
Sep 12 10:16:54.993616 containerd[1747]: time="2025-09-12T10:16:54.993562398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bsbxp,Uid:01013f62-3cc1-47cf-98d2-a13d1bd575c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9862cd0febfbc455a2c62acc5c528be10486b8e6cf1dce6469bf2b86e7d5023\""
Sep 12 10:16:55.005278 containerd[1747]: time="2025-09-12T10:16:55.005227074Z" level=info msg="CreateContainer within sandbox \"b9862cd0febfbc455a2c62acc5c528be10486b8e6cf1dce6469bf2b86e7d5023\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 10:16:55.042608 containerd[1747]: time="2025-09-12T10:16:55.042556438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cr4rd,Uid:0ff64da6-78b8-40f2-9e37-d5f9f259769a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bac67aa347a3d59366a17a9d3a1f045293e27d78057d391382734039f010c55\""
Sep 12 10:16:55.046671 containerd[1747]: time="2025-09-12T10:16:55.046452497Z" level=info msg="CreateContainer within sandbox \"b9862cd0febfbc455a2c62acc5c528be10486b8e6cf1dce6469bf2b86e7d5023\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"857c030e9cb21dc95890e7068032da399c5033bb52b7e6cd2228e309141241eb\""
Sep 12 10:16:55.047287 containerd[1747]: time="2025-09-12T10:16:55.047221109Z" level=info msg="StartContainer for \"857c030e9cb21dc95890e7068032da399c5033bb52b7e6cd2228e309141241eb\""
Sep 12 10:16:55.056842 containerd[1747]: time="2025-09-12T10:16:55.056144844Z" level=info msg="CreateContainer within sandbox \"3bac67aa347a3d59366a17a9d3a1f045293e27d78057d391382734039f010c55\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 10:16:55.086162 systemd[1]: Started cri-containerd-857c030e9cb21dc95890e7068032da399c5033bb52b7e6cd2228e309141241eb.scope - libcontainer container 857c030e9cb21dc95890e7068032da399c5033bb52b7e6cd2228e309141241eb.
Sep 12 10:16:55.097275 containerd[1747]: time="2025-09-12T10:16:55.097234564Z" level=info msg="CreateContainer within sandbox \"3bac67aa347a3d59366a17a9d3a1f045293e27d78057d391382734039f010c55\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9c4853909a3acd6811fc8df24b57219733044902ff34b9401026dc4158ddebb\""
Sep 12 10:16:55.100166 containerd[1747]: time="2025-09-12T10:16:55.100133808Z" level=info msg="StartContainer for \"d9c4853909a3acd6811fc8df24b57219733044902ff34b9401026dc4158ddebb\""
Sep 12 10:16:55.141697 containerd[1747]: time="2025-09-12T10:16:55.141608035Z" level=info msg="StartContainer for \"857c030e9cb21dc95890e7068032da399c5033bb52b7e6cd2228e309141241eb\" returns successfully"
Sep 12 10:16:55.144890 systemd[1]: Started cri-containerd-d9c4853909a3acd6811fc8df24b57219733044902ff34b9401026dc4158ddebb.scope - libcontainer container d9c4853909a3acd6811fc8df24b57219733044902ff34b9401026dc4158ddebb.
Sep 12 10:16:55.187945 containerd[1747]: time="2025-09-12T10:16:55.187851133Z" level=info msg="StartContainer for \"d9c4853909a3acd6811fc8df24b57219733044902ff34b9401026dc4158ddebb\" returns successfully"
Sep 12 10:16:55.754719 kubelet[3384]: I0912 10:16:55.754639 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cr4rd" podStartSLOduration=25.754584095 podStartE2EDuration="25.754584095s" podCreationTimestamp="2025-09-12 10:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:16:55.753282675 +0000 UTC m=+31.248496108" watchObservedRunningTime="2025-09-12 10:16:55.754584095 +0000 UTC m=+31.249797628"
Sep 12 10:16:55.771017 kubelet[3384]: I0912 10:16:55.770604 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bsbxp" podStartSLOduration=25.770583237 podStartE2EDuration="25.770583237s" podCreationTimestamp="2025-09-12 10:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:16:55.769199316 +0000 UTC m=+31.264412749" watchObservedRunningTime="2025-09-12 10:16:55.770583237 +0000 UTC m=+31.265796670"
Sep 12 10:18:03.647480 systemd[1]: Started sshd@7-10.200.8.13:22-10.200.16.10:39034.service - OpenSSH per-connection server daemon (10.200.16.10:39034).
Sep 12 10:18:04.274174 sshd[4785]: Accepted publickey for core from 10.200.16.10 port 39034 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:18:04.275743 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:04.280148 systemd-logind[1727]: New session 10 of user core.
Sep 12 10:18:04.291150 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 10:18:04.796326 sshd[4787]: Connection closed by 10.200.16.10 port 39034
Sep 12 10:18:04.797125 sshd-session[4785]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:04.800886 systemd[1]: sshd@7-10.200.8.13:22-10.200.16.10:39034.service: Deactivated successfully.
Sep 12 10:18:04.803331 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 10:18:04.805369 systemd-logind[1727]: Session 10 logged out. Waiting for processes to exit.
Sep 12 10:18:04.806507 systemd-logind[1727]: Removed session 10.
Sep 12 10:18:09.914302 systemd[1]: Started sshd@8-10.200.8.13:22-10.200.16.10:58872.service - OpenSSH per-connection server daemon (10.200.16.10:58872).
Sep 12 10:18:10.536426 sshd[4800]: Accepted publickey for core from 10.200.16.10 port 58872 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:18:10.537920 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:10.542663 systemd-logind[1727]: New session 11 of user core.
Sep 12 10:18:10.548134 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 10:18:11.271025 sshd[4802]: Connection closed by 10.200.16.10 port 58872
Sep 12 10:18:11.271759 sshd-session[4800]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:11.275611 systemd[1]: sshd@8-10.200.8.13:22-10.200.16.10:58872.service: Deactivated successfully.
Sep 12 10:18:11.277933 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 10:18:11.279083 systemd-logind[1727]: Session 11 logged out. Waiting for processes to exit.
Sep 12 10:18:11.280487 systemd-logind[1727]: Removed session 11.
Sep 12 10:18:16.392305 systemd[1]: Started sshd@9-10.200.8.13:22-10.200.16.10:58888.service - OpenSSH per-connection server daemon (10.200.16.10:58888).
Sep 12 10:18:17.018468 sshd[4815]: Accepted publickey for core from 10.200.16.10 port 58888 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:18:17.019915 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:17.024347 systemd-logind[1727]: New session 12 of user core.
Sep 12 10:18:17.033143 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 10:18:17.517506 sshd[4817]: Connection closed by 10.200.16.10 port 58888
Sep 12 10:18:17.519180 sshd-session[4815]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:17.522908 systemd[1]: sshd@9-10.200.8.13:22-10.200.16.10:58888.service: Deactivated successfully.
Sep 12 10:18:17.525176 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 10:18:17.526316 systemd-logind[1727]: Session 12 logged out. Waiting for processes to exit.
Sep 12 10:18:17.527312 systemd-logind[1727]: Removed session 12.
Sep 12 10:18:22.636328 systemd[1]: Started sshd@10-10.200.8.13:22-10.200.16.10:56192.service - OpenSSH per-connection server daemon (10.200.16.10:56192).
Sep 12 10:18:23.259087 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 56192 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:18:23.260606 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:23.265312 systemd-logind[1727]: New session 13 of user core.
Sep 12 10:18:23.268135 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 10:18:23.777366 sshd[4833]: Connection closed by 10.200.16.10 port 56192
Sep 12 10:18:23.778252 sshd-session[4831]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:23.782507 systemd[1]: sshd@10-10.200.8.13:22-10.200.16.10:56192.service: Deactivated successfully.
Sep 12 10:18:23.785741 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 10:18:23.787048 systemd-logind[1727]: Session 13 logged out. Waiting for processes to exit.
Sep 12 10:18:23.788240 systemd-logind[1727]: Removed session 13.
Sep 12 10:18:28.905376 systemd[1]: Started sshd@11-10.200.8.13:22-10.200.16.10:56208.service - OpenSSH per-connection server daemon (10.200.16.10:56208).
Sep 12 10:18:29.534755 sshd[4848]: Accepted publickey for core from 10.200.16.10 port 56208 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:18:29.536219 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:29.540886 systemd-logind[1727]: New session 14 of user core.
Sep 12 10:18:29.550145 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 10:18:30.051244 sshd[4850]: Connection closed by 10.200.16.10 port 56208
Sep 12 10:18:30.052141 sshd-session[4848]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:30.055189 systemd[1]: sshd@11-10.200.8.13:22-10.200.16.10:56208.service: Deactivated successfully.
Sep 12 10:18:30.057723 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 10:18:30.059773 systemd-logind[1727]: Session 14 logged out. Waiting for processes to exit.
Sep 12 10:18:30.061125 systemd-logind[1727]: Removed session 14.
Sep 12 10:18:30.168310 systemd[1]: Started sshd@12-10.200.8.13:22-10.200.16.10:37474.service - OpenSSH per-connection server daemon (10.200.16.10:37474).
Sep 12 10:18:30.791155 sshd[4863]: Accepted publickey for core from 10.200.16.10 port 37474 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:18:30.792607 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:30.798146 systemd-logind[1727]: New session 15 of user core.
Sep 12 10:18:30.807166 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 10:18:31.340792 sshd[4865]: Connection closed by 10.200.16.10 port 37474
Sep 12 10:18:31.342582 sshd-session[4863]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:31.345531 systemd[1]: sshd@12-10.200.8.13:22-10.200.16.10:37474.service: Deactivated successfully.
Sep 12 10:18:31.348460 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 10:18:31.350115 systemd-logind[1727]: Session 15 logged out. Waiting for processes to exit.
Sep 12 10:18:31.351735 systemd-logind[1727]: Removed session 15.
Sep 12 10:18:31.452096 systemd[1]: Started sshd@13-10.200.8.13:22-10.200.16.10:37488.service - OpenSSH per-connection server daemon (10.200.16.10:37488).
Sep 12 10:18:32.082005 sshd[4877]: Accepted publickey for core from 10.200.16.10 port 37488 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:18:32.083564 sshd-session[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:32.088832 systemd-logind[1727]: New session 16 of user core.
Sep 12 10:18:32.096128 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 10:18:32.601467 sshd[4879]: Connection closed by 10.200.16.10 port 37488
Sep 12 10:18:32.603156 sshd-session[4877]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:32.607723 systemd[1]: sshd@13-10.200.8.13:22-10.200.16.10:37488.service: Deactivated successfully.
Sep 12 10:18:32.612995 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 10:18:32.613862 systemd-logind[1727]: Session 16 logged out. Waiting for processes to exit.
Sep 12 10:18:32.614980 systemd-logind[1727]: Removed session 16.
Sep 12 10:18:37.724291 systemd[1]: Started sshd@14-10.200.8.13:22-10.200.16.10:37490.service - OpenSSH per-connection server daemon (10.200.16.10:37490).
Sep 12 10:18:38.348437 sshd[4891]: Accepted publickey for core from 10.200.16.10 port 37490 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:18:38.349847 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:38.354397 systemd-logind[1727]: New session 17 of user core.
Sep 12 10:18:38.361107 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 10:18:38.852590 sshd[4893]: Connection closed by 10.200.16.10 port 37490
Sep 12 10:18:38.853363 sshd-session[4891]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:38.857409 systemd[1]: sshd@14-10.200.8.13:22-10.200.16.10:37490.service: Deactivated successfully.
Sep 12 10:18:38.859785 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 10:18:38.860857 systemd-logind[1727]: Session 17 logged out. Waiting for processes to exit.
Sep 12 10:18:38.862277 systemd-logind[1727]: Removed session 17.
Sep 12 10:18:38.976832 systemd[1]: Started sshd@15-10.200.8.13:22-10.200.16.10:37498.service - OpenSSH per-connection server daemon (10.200.16.10:37498).
Sep 12 10:18:39.599413 sshd[4905]: Accepted publickey for core from 10.200.16.10 port 37498 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:18:39.600864 sshd-session[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:39.605286 systemd-logind[1727]: New session 18 of user core.
Sep 12 10:18:39.611245 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 10:18:40.163499 sshd[4907]: Connection closed by 10.200.16.10 port 37498
Sep 12 10:18:40.164343 sshd-session[4905]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:40.168471 systemd-logind[1727]: Session 18 logged out. Waiting for processes to exit.
Sep 12 10:18:40.169335 systemd[1]: sshd@15-10.200.8.13:22-10.200.16.10:37498.service: Deactivated successfully.
Sep 12 10:18:40.172011 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 10:18:40.173687 systemd-logind[1727]: Removed session 18.
Sep 12 10:18:40.283339 systemd[1]: Started sshd@16-10.200.8.13:22-10.200.16.10:48860.service - OpenSSH per-connection server daemon (10.200.16.10:48860).
Sep 12 10:18:40.904487 sshd[4917]: Accepted publickey for core from 10.200.16.10 port 48860 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:18:40.906157 sshd-session[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:40.910487 systemd-logind[1727]: New session 19 of user core.
Sep 12 10:18:40.915120 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 10:18:41.926321 sshd[4919]: Connection closed by 10.200.16.10 port 48860
Sep 12 10:18:41.927156 sshd-session[4917]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:41.931310 systemd[1]: sshd@16-10.200.8.13:22-10.200.16.10:48860.service: Deactivated successfully.
Sep 12 10:18:41.935128 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 10:18:41.937100 systemd-logind[1727]: Session 19 logged out. Waiting for processes to exit.
Sep 12 10:18:41.938220 systemd-logind[1727]: Removed session 19.
Sep 12 10:18:42.053448 systemd[1]: Started sshd@17-10.200.8.13:22-10.200.16.10:48864.service - OpenSSH per-connection server daemon (10.200.16.10:48864).
Sep 12 10:18:42.675007 sshd[4936]: Accepted publickey for core from 10.200.16.10 port 48864 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:18:42.676747 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:42.682558 systemd-logind[1727]: New session 20 of user core.
Sep 12 10:18:42.687153 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 10:18:43.294049 sshd[4938]: Connection closed by 10.200.16.10 port 48864
Sep 12 10:18:43.294786 sshd-session[4936]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:43.298901 systemd[1]: sshd@17-10.200.8.13:22-10.200.16.10:48864.service: Deactivated successfully.
Sep 12 10:18:43.301350 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 10:18:43.302382 systemd-logind[1727]: Session 20 logged out. Waiting for processes to exit.
Sep 12 10:18:43.303370 systemd-logind[1727]: Removed session 20.
Sep 12 10:18:43.409286 systemd[1]: Started sshd@18-10.200.8.13:22-10.200.16.10:48866.service - OpenSSH per-connection server daemon (10.200.16.10:48866).
Sep 12 10:18:44.031591 sshd[4948]: Accepted publickey for core from 10.200.16.10 port 48866 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:18:44.033209 sshd-session[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:44.038496 systemd-logind[1727]: New session 21 of user core.
Sep 12 10:18:44.046148 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 10:18:44.532463 sshd[4950]: Connection closed by 10.200.16.10 port 48866
Sep 12 10:18:44.533228 sshd-session[4948]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:44.536607 systemd[1]: sshd@18-10.200.8.13:22-10.200.16.10:48866.service: Deactivated successfully.
Sep 12 10:18:44.539080 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 10:18:44.541130 systemd-logind[1727]: Session 21 logged out. Waiting for processes to exit.
Sep 12 10:18:44.542185 systemd-logind[1727]: Removed session 21.
Sep 12 10:18:49.649311 systemd[1]: Started sshd@19-10.200.8.13:22-10.200.16.10:48878.service - OpenSSH per-connection server daemon (10.200.16.10:48878).
Sep 12 10:18:50.272036 sshd[4964]: Accepted publickey for core from 10.200.16.10 port 48878 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:18:50.273487 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:50.278837 systemd-logind[1727]: New session 22 of user core.
Sep 12 10:18:50.288131 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 10:18:50.774113 sshd[4966]: Connection closed by 10.200.16.10 port 48878
Sep 12 10:18:50.774891 sshd-session[4964]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:50.778766 systemd[1]: sshd@19-10.200.8.13:22-10.200.16.10:48878.service: Deactivated successfully.
Sep 12 10:18:50.780852 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 10:18:50.782082 systemd-logind[1727]: Session 22 logged out. Waiting for processes to exit.
Sep 12 10:18:50.783071 systemd-logind[1727]: Removed session 22.
Sep 12 10:18:55.891590 systemd[1]: Started sshd@20-10.200.8.13:22-10.200.16.10:58794.service - OpenSSH per-connection server daemon (10.200.16.10:58794).
Sep 12 10:18:56.520681 sshd[4978]: Accepted publickey for core from 10.200.16.10 port 58794 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:18:56.522276 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:56.526583 systemd-logind[1727]: New session 23 of user core.
Sep 12 10:18:56.536136 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 10:18:57.020193 sshd[4980]: Connection closed by 10.200.16.10 port 58794
Sep 12 10:18:57.021073 sshd-session[4978]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:57.025357 systemd-logind[1727]: Session 23 logged out. Waiting for processes to exit.
Sep 12 10:18:57.026174 systemd[1]: sshd@20-10.200.8.13:22-10.200.16.10:58794.service: Deactivated successfully.
Sep 12 10:18:57.028431 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 10:18:57.029488 systemd-logind[1727]: Removed session 23.
Sep 12 10:18:57.136294 systemd[1]: Started sshd@21-10.200.8.13:22-10.200.16.10:58796.service - OpenSSH per-connection server daemon (10.200.16.10:58796).
Sep 12 10:18:57.760530 sshd[4992]: Accepted publickey for core from 10.200.16.10 port 58796 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4
Sep 12 10:18:57.762036 sshd-session[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:57.766449 systemd-logind[1727]: New session 24 of user core.
Sep 12 10:18:57.772103 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 10:18:59.415336 containerd[1747]: time="2025-09-12T10:18:59.414993430Z" level=info msg="StopContainer for \"0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460\" with timeout 30 (s)"
Sep 12 10:18:59.418523 containerd[1747]: time="2025-09-12T10:18:59.418042966Z" level=info msg="Stop container \"0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460\" with signal terminated"
Sep 12 10:18:59.476104 containerd[1747]: time="2025-09-12T10:18:59.476048543Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 10:18:59.496584 systemd[1]: cri-containerd-0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460.scope: Deactivated successfully.
Sep 12 10:18:59.499070 containerd[1747]: time="2025-09-12T10:18:59.498813409Z" level=info msg="StopContainer for \"99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc\" with timeout 2 (s)"
Sep 12 10:18:59.499337 containerd[1747]: time="2025-09-12T10:18:59.499296914Z" level=info msg="Stop container \"99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc\" with signal terminated"
Sep 12 10:18:59.514285 systemd-networkd[1580]: lxc_health: Link DOWN
Sep 12 10:18:59.514298 systemd-networkd[1580]: lxc_health: Lost carrier
Sep 12 10:18:59.531380 systemd[1]: cri-containerd-99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc.scope: Deactivated successfully.
Sep 12 10:18:59.532224 systemd[1]: cri-containerd-99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc.scope: Consumed 7.343s CPU time, 124.1M memory peak, 144K read from disk, 13.3M written to disk.
Sep 12 10:18:59.546676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460-rootfs.mount: Deactivated successfully.
Sep 12 10:18:59.568818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc-rootfs.mount: Deactivated successfully.
Sep 12 10:18:59.600725 containerd[1747]: time="2025-09-12T10:18:59.600618497Z" level=info msg="shim disconnected" id=0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460 namespace=k8s.io
Sep 12 10:18:59.600725 containerd[1747]: time="2025-09-12T10:18:59.600698097Z" level=warning msg="cleaning up after shim disconnected" id=0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460 namespace=k8s.io
Sep 12 10:18:59.600725 containerd[1747]: time="2025-09-12T10:18:59.600718098Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:18:59.603244 containerd[1747]: time="2025-09-12T10:18:59.600672897Z" level=info msg="shim disconnected" id=99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc namespace=k8s.io
Sep 12 10:18:59.603244 containerd[1747]: time="2025-09-12T10:18:59.601137003Z" level=warning msg="cleaning up after shim disconnected" id=99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc namespace=k8s.io
Sep 12 10:18:59.603244 containerd[1747]: time="2025-09-12T10:18:59.601148703Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:18:59.633434 containerd[1747]: time="2025-09-12T10:18:59.633385979Z" level=info msg="StopContainer for \"0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460\" returns successfully"
Sep 12 10:18:59.634088 containerd[1747]: time="2025-09-12T10:18:59.634056287Z" level=info msg="StopPodSandbox for \"36b2abcc3da06c11bdb3a79693cbdee8be6b99357d7fdd700f44dd0f02456630\""
Sep 12 10:18:59.634263 containerd[1747]: time="2025-09-12T10:18:59.634213089Z" level=info msg="Container to stop \"0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:18:59.634793 containerd[1747]: time="2025-09-12T10:18:59.634620093Z" level=info msg="StopContainer for \"99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc\" returns successfully"
Sep 12 10:18:59.635302 containerd[1747]: time="2025-09-12T10:18:59.635202400Z" level=info msg="StopPodSandbox for \"067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5\""
Sep 12 10:18:59.635520 containerd[1747]: time="2025-09-12T10:18:59.635455203Z" level=info msg="Container to stop \"2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:18:59.635625 containerd[1747]: time="2025-09-12T10:18:59.635608205Z" level=info msg="Container to stop \"43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:18:59.635770 containerd[1747]: time="2025-09-12T10:18:59.635687406Z" level=info msg="Container to stop \"99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:18:59.635770 containerd[1747]: time="2025-09-12T10:18:59.635703606Z" level=info msg="Container to stop \"e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:18:59.635770 containerd[1747]: time="2025-09-12T10:18:59.635717606Z" level=info msg="Container to stop \"57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:18:59.638324 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36b2abcc3da06c11bdb3a79693cbdee8be6b99357d7fdd700f44dd0f02456630-shm.mount: Deactivated successfully.
Sep 12 10:18:59.638460 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5-shm.mount: Deactivated successfully.
Sep 12 10:18:59.649605 systemd[1]: cri-containerd-36b2abcc3da06c11bdb3a79693cbdee8be6b99357d7fdd700f44dd0f02456630.scope: Deactivated successfully.
Sep 12 10:18:59.653857 systemd[1]: cri-containerd-067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5.scope: Deactivated successfully.
Sep 12 10:18:59.684386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36b2abcc3da06c11bdb3a79693cbdee8be6b99357d7fdd700f44dd0f02456630-rootfs.mount: Deactivated successfully.
Sep 12 10:18:59.696549 containerd[1747]: time="2025-09-12T10:18:59.696480015Z" level=info msg="shim disconnected" id=36b2abcc3da06c11bdb3a79693cbdee8be6b99357d7fdd700f44dd0f02456630 namespace=k8s.io
Sep 12 10:18:59.698859 containerd[1747]: time="2025-09-12T10:18:59.698819942Z" level=warning msg="cleaning up after shim disconnected" id=36b2abcc3da06c11bdb3a79693cbdee8be6b99357d7fdd700f44dd0f02456630 namespace=k8s.io
Sep 12 10:18:59.698859 containerd[1747]: time="2025-09-12T10:18:59.698848843Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:18:59.699271 containerd[1747]: time="2025-09-12T10:18:59.697795831Z" level=info msg="shim disconnected" id=067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5 namespace=k8s.io
Sep 12 10:18:59.699271 containerd[1747]: time="2025-09-12T10:18:59.699133446Z" level=warning msg="cleaning up after shim disconnected" id=067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5 namespace=k8s.io
Sep 12 10:18:59.699271 containerd[1747]: time="2025-09-12T10:18:59.699151646Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:18:59.721131 containerd[1747]: time="2025-09-12T10:18:59.720834099Z" level=info msg="TearDown network for sandbox \"36b2abcc3da06c11bdb3a79693cbdee8be6b99357d7fdd700f44dd0f02456630\" successfully"
Sep 12 10:18:59.721131 containerd[1747]: time="2025-09-12T10:18:59.720872400Z" level=info msg="StopPodSandbox for \"36b2abcc3da06c11bdb3a79693cbdee8be6b99357d7fdd700f44dd0f02456630\" returns successfully"
Sep 12 10:18:59.721131 containerd[1747]: time="2025-09-12T10:18:59.720900900Z" level=info msg="TearDown network for sandbox \"067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5\" successfully"
Sep 12 10:18:59.721131 containerd[1747]: time="2025-09-12T10:18:59.720915400Z" level=info msg="StopPodSandbox for \"067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5\" returns successfully"
Sep 12 10:18:59.721586 kubelet[3384]: E0912 10:18:59.721546 3384 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 10:18:59.792062 kubelet[3384]: I0912 10:18:59.792013 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-host-proc-sys-net\") pod \"e473bc90-8bce-4e54-b4a6-49d1df19f643\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") "
Sep 12 10:18:59.792062 kubelet[3384]: I0912 10:18:59.792064 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-cilium-run\") pod \"e473bc90-8bce-4e54-b4a6-49d1df19f643\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") "
Sep 12 10:18:59.792285 kubelet[3384]: I0912 10:18:59.792096 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e473bc90-8bce-4e54-b4a6-49d1df19f643-cilium-config-path\") pod \"e473bc90-8bce-4e54-b4a6-49d1df19f643\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") "
Sep 12 10:18:59.792285 kubelet[3384]: I0912 10:18:59.792132 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-cni-path\") pod \"e473bc90-8bce-4e54-b4a6-49d1df19f643\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") "
Sep 12 10:18:59.792285 kubelet[3384]: I0912 10:18:59.792148 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-etc-cni-netd\") pod \"e473bc90-8bce-4e54-b4a6-49d1df19f643\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") "
Sep 12 10:18:59.792285 kubelet[3384]: I0912 10:18:59.792172 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e473bc90-8bce-4e54-b4a6-49d1df19f643-clustermesh-secrets\") pod \"e473bc90-8bce-4e54-b4a6-49d1df19f643\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") "
Sep 12 10:18:59.792285 kubelet[3384]: I0912 10:18:59.792196 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e473bc90-8bce-4e54-b4a6-49d1df19f643-hubble-tls\") pod \"e473bc90-8bce-4e54-b4a6-49d1df19f643\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") "
Sep 12 10:18:59.792285 kubelet[3384]: I0912 10:18:59.792215 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-xtables-lock\") pod \"e473bc90-8bce-4e54-b4a6-49d1df19f643\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") "
Sep 12 10:18:59.792522 kubelet[3384]: I0912 10:18:59.792233 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-host-proc-sys-kernel\") pod \"e473bc90-8bce-4e54-b4a6-49d1df19f643\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") "
Sep 12 10:18:59.792522 kubelet[3384]: I0912 10:18:59.792260 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27be7af7-0b86-421b-a91f-75e015caf6fb-cilium-config-path\") pod \"27be7af7-0b86-421b-a91f-75e015caf6fb\" (UID: \"27be7af7-0b86-421b-a91f-75e015caf6fb\") "
Sep 12 10:18:59.792522 kubelet[3384]: I0912 10:18:59.792287 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl9bx\" (UniqueName: \"kubernetes.io/projected/e473bc90-8bce-4e54-b4a6-49d1df19f643-kube-api-access-dl9bx\") pod \"e473bc90-8bce-4e54-b4a6-49d1df19f643\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") "
Sep 12 10:18:59.792522 kubelet[3384]: I0912 10:18:59.792306 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-hostproc\") pod \"e473bc90-8bce-4e54-b4a6-49d1df19f643\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") "
Sep 12 10:18:59.792522 kubelet[3384]: I0912 10:18:59.792326 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-cilium-cgroup\") pod \"e473bc90-8bce-4e54-b4a6-49d1df19f643\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") "
Sep 12 10:18:59.792522 kubelet[3384]: I0912 10:18:59.792351 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxrbm\" (UniqueName: \"kubernetes.io/projected/27be7af7-0b86-421b-a91f-75e015caf6fb-kube-api-access-jxrbm\") pod \"27be7af7-0b86-421b-a91f-75e015caf6fb\" (UID: \"27be7af7-0b86-421b-a91f-75e015caf6fb\") "
Sep 12 10:18:59.792774 kubelet[3384]: I0912 10:18:59.792373 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-lib-modules\") pod \"e473bc90-8bce-4e54-b4a6-49d1df19f643\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") "
Sep 12 10:18:59.792774 kubelet[3384]: I0912 10:18:59.792393 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-bpf-maps\") pod \"e473bc90-8bce-4e54-b4a6-49d1df19f643\" (UID: \"e473bc90-8bce-4e54-b4a6-49d1df19f643\") "
Sep 12 10:18:59.792774 kubelet[3384]: I0912 10:18:59.792499 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e473bc90-8bce-4e54-b4a6-49d1df19f643" (UID: "e473bc90-8bce-4e54-b4a6-49d1df19f643"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:18:59.792774 kubelet[3384]: I0912 10:18:59.792546 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e473bc90-8bce-4e54-b4a6-49d1df19f643" (UID: "e473bc90-8bce-4e54-b4a6-49d1df19f643"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:18:59.792774 kubelet[3384]: I0912 10:18:59.792567 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e473bc90-8bce-4e54-b4a6-49d1df19f643" (UID: "e473bc90-8bce-4e54-b4a6-49d1df19f643"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:18:59.795079 kubelet[3384]: I0912 10:18:59.793029 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e473bc90-8bce-4e54-b4a6-49d1df19f643" (UID: "e473bc90-8bce-4e54-b4a6-49d1df19f643"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:18:59.795079 kubelet[3384]: I0912 10:18:59.793094 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-cni-path" (OuterVolumeSpecName: "cni-path") pod "e473bc90-8bce-4e54-b4a6-49d1df19f643" (UID: "e473bc90-8bce-4e54-b4a6-49d1df19f643"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:18:59.795079 kubelet[3384]: I0912 10:18:59.793117 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e473bc90-8bce-4e54-b4a6-49d1df19f643" (UID: "e473bc90-8bce-4e54-b4a6-49d1df19f643"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:18:59.795709 kubelet[3384]: I0912 10:18:59.795661 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e473bc90-8bce-4e54-b4a6-49d1df19f643-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e473bc90-8bce-4e54-b4a6-49d1df19f643" (UID: "e473bc90-8bce-4e54-b4a6-49d1df19f643"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 12 10:18:59.797913 kubelet[3384]: I0912 10:18:59.797888 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e473bc90-8bce-4e54-b4a6-49d1df19f643" (UID: "e473bc90-8bce-4e54-b4a6-49d1df19f643"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:18:59.798429 kubelet[3384]: I0912 10:18:59.798405 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-hostproc" (OuterVolumeSpecName: "hostproc") pod "e473bc90-8bce-4e54-b4a6-49d1df19f643" (UID: "e473bc90-8bce-4e54-b4a6-49d1df19f643"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:18:59.798586 kubelet[3384]: I0912 10:18:59.798568 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e473bc90-8bce-4e54-b4a6-49d1df19f643" (UID: "e473bc90-8bce-4e54-b4a6-49d1df19f643"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:18:59.800304 kubelet[3384]: I0912 10:18:59.800280 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e473bc90-8bce-4e54-b4a6-49d1df19f643-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e473bc90-8bce-4e54-b4a6-49d1df19f643" (UID: "e473bc90-8bce-4e54-b4a6-49d1df19f643"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 12 10:18:59.800456 kubelet[3384]: I0912 10:18:59.800438 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e473bc90-8bce-4e54-b4a6-49d1df19f643" (UID: "e473bc90-8bce-4e54-b4a6-49d1df19f643"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:18:59.802788 kubelet[3384]: I0912 10:18:59.802761 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27be7af7-0b86-421b-a91f-75e015caf6fb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "27be7af7-0b86-421b-a91f-75e015caf6fb" (UID: "27be7af7-0b86-421b-a91f-75e015caf6fb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 12 10:18:59.803164 kubelet[3384]: I0912 10:18:59.803137 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e473bc90-8bce-4e54-b4a6-49d1df19f643-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e473bc90-8bce-4e54-b4a6-49d1df19f643" (UID: "e473bc90-8bce-4e54-b4a6-49d1df19f643"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 10:18:59.803565 kubelet[3384]: I0912 10:18:59.803538 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e473bc90-8bce-4e54-b4a6-49d1df19f643-kube-api-access-dl9bx" (OuterVolumeSpecName: "kube-api-access-dl9bx") pod "e473bc90-8bce-4e54-b4a6-49d1df19f643" (UID: "e473bc90-8bce-4e54-b4a6-49d1df19f643"). InnerVolumeSpecName "kube-api-access-dl9bx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 10:18:59.804158 kubelet[3384]: I0912 10:18:59.804126 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27be7af7-0b86-421b-a91f-75e015caf6fb-kube-api-access-jxrbm" (OuterVolumeSpecName: "kube-api-access-jxrbm") pod "27be7af7-0b86-421b-a91f-75e015caf6fb" (UID: "27be7af7-0b86-421b-a91f-75e015caf6fb"). InnerVolumeSpecName "kube-api-access-jxrbm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 10:18:59.893655 kubelet[3384]: I0912 10:18:59.893595 3384 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e473bc90-8bce-4e54-b4a6-49d1df19f643-clustermesh-secrets\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.893655 kubelet[3384]: I0912 10:18:59.893638 3384 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e473bc90-8bce-4e54-b4a6-49d1df19f643-hubble-tls\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.893655 kubelet[3384]: I0912 10:18:59.893651 3384 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-xtables-lock\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.893655 kubelet[3384]: I0912 10:18:59.893666 3384 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-host-proc-sys-kernel\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.893977 kubelet[3384]: I0912 10:18:59.893681 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27be7af7-0b86-421b-a91f-75e015caf6fb-cilium-config-path\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.893977 kubelet[3384]: I0912 10:18:59.893693 3384 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dl9bx\" (UniqueName: \"kubernetes.io/projected/e473bc90-8bce-4e54-b4a6-49d1df19f643-kube-api-access-dl9bx\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.893977 kubelet[3384]: I0912 10:18:59.893707 3384 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-hostproc\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.893977 kubelet[3384]: I0912 10:18:59.893719 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-cilium-cgroup\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.893977 kubelet[3384]: I0912 10:18:59.893730 3384 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jxrbm\" (UniqueName: \"kubernetes.io/projected/27be7af7-0b86-421b-a91f-75e015caf6fb-kube-api-access-jxrbm\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.893977 kubelet[3384]: I0912 10:18:59.893741 3384 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-lib-modules\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.893977 kubelet[3384]: I0912 10:18:59.893752 3384 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-bpf-maps\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.893977 kubelet[3384]: I0912 10:18:59.893762 3384 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-host-proc-sys-net\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.894173 kubelet[3384]: I0912 10:18:59.893772 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-cilium-run\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.894173 kubelet[3384]: I0912 10:18:59.893783 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e473bc90-8bce-4e54-b4a6-49d1df19f643-cilium-config-path\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.894173 kubelet[3384]: I0912 10:18:59.893793 3384 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-cni-path\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.894173 kubelet[3384]: I0912 10:18:59.893805 3384 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e473bc90-8bce-4e54-b4a6-49d1df19f643-etc-cni-netd\") on node \"ci-4230.2.2-n-6349f41dc3\" DevicePath \"\""
Sep 12 10:18:59.980421 kubelet[3384]: I0912 10:18:59.978676 3384 scope.go:117] "RemoveContainer" containerID="0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460"
Sep 12 10:18:59.982281 containerd[1747]: time="2025-09-12T10:18:59.982122948Z" level=info msg="RemoveContainer for \"0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460\""
Sep 12 10:18:59.989925 systemd[1]: Removed slice kubepods-besteffort-pod27be7af7_0b86_421b_a91f_75e015caf6fb.slice - libcontainer container kubepods-besteffort-pod27be7af7_0b86_421b_a91f_75e015caf6fb.slice.
Sep 12 10:18:59.997034 systemd[1]: Removed slice kubepods-burstable-pode473bc90_8bce_4e54_b4a6_49d1df19f643.slice - libcontainer container kubepods-burstable-pode473bc90_8bce_4e54_b4a6_49d1df19f643.slice.
Sep 12 10:18:59.997222 systemd[1]: kubepods-burstable-pode473bc90_8bce_4e54_b4a6_49d1df19f643.slice: Consumed 7.430s CPU time, 124.6M memory peak, 144K read from disk, 13.3M written to disk.
Sep 12 10:18:59.999331 containerd[1747]: time="2025-09-12T10:18:59.999265048Z" level=info msg="RemoveContainer for \"0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460\" returns successfully"
Sep 12 10:19:00.000379 kubelet[3384]: I0912 10:19:00.000181 3384 scope.go:117] "RemoveContainer" containerID="0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460"
Sep 12 10:19:00.001263 containerd[1747]: time="2025-09-12T10:19:00.001225671Z" level=error msg="ContainerStatus for \"0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460\": not found"
Sep 12 10:19:00.002160 kubelet[3384]: E0912 10:19:00.002081 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460\": not found" containerID="0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460"
Sep 12 10:19:00.002285 kubelet[3384]: I0912 10:19:00.002146 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460"} err="failed to get container status \"0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460\": rpc error: code = NotFound desc = an error occurred when try to find container \"0c7416a1f90bdaa8ded79fd2300ebf54eabe649332fd2c6c49c878b050648460\": not found"
Sep 12 10:19:00.002285 kubelet[3384]: I0912 10:19:00.002207 3384 scope.go:117] "RemoveContainer" containerID="99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc"
Sep 12 10:19:00.005474 containerd[1747]: time="2025-09-12T10:19:00.005002715Z" level=info msg="RemoveContainer for \"99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc\""
Sep 12 10:19:00.026820 containerd[1747]: time="2025-09-12T10:19:00.026754769Z" level=info msg="RemoveContainer for \"99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc\" returns successfully"
Sep 12 10:19:00.028198 kubelet[3384]: I0912 10:19:00.027998 3384 scope.go:117] "RemoveContainer" containerID="43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6"
Sep 12 10:19:00.029671 containerd[1747]: time="2025-09-12T10:19:00.029373700Z" level=info msg="RemoveContainer for \"43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6\""
Sep 12 10:19:00.040907 containerd[1747]: time="2025-09-12T10:19:00.040850934Z" level=info msg="RemoveContainer for \"43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6\" returns successfully"
Sep 12 10:19:00.041240 kubelet[3384]: I0912 10:19:00.041210 3384 scope.go:117] "RemoveContainer" containerID="57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170"
Sep 12 10:19:00.042470 containerd[1747]: time="2025-09-12T10:19:00.042401452Z" level=info msg="RemoveContainer for \"57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170\""
Sep 12 10:19:00.058041 containerd[1747]: time="2025-09-12T10:19:00.057996734Z" level=info msg="RemoveContainer for \"57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170\" returns successfully"
Sep 12 10:19:00.058332 kubelet[3384]: I0912 10:19:00.058301 3384 scope.go:117] "RemoveContainer" containerID="2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024"
Sep 12 10:19:00.059752 containerd[1747]: time="2025-09-12T10:19:00.059449851Z" level=info msg="RemoveContainer for \"2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024\""
Sep 12 10:19:00.067426 containerd[1747]: time="2025-09-12T10:19:00.067384143Z" level=info msg="RemoveContainer for \"2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024\" returns successfully"
Sep 12 10:19:00.067708 kubelet[3384]: I0912 10:19:00.067679 3384 scope.go:117] "RemoveContainer" containerID="e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482"
Sep 12 10:19:00.068842 containerd[1747]: time="2025-09-12T10:19:00.068806860Z" level=info msg="RemoveContainer for \"e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482\""
Sep 12 10:19:00.079905 containerd[1747]: time="2025-09-12T10:19:00.079576886Z" level=info msg="RemoveContainer for \"e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482\" returns successfully"
Sep 12 10:19:00.080135 kubelet[3384]: I0912 10:19:00.080103 3384 scope.go:117] "RemoveContainer" containerID="99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc"
Sep 12 10:19:00.080356 containerd[1747]: time="2025-09-12T10:19:00.080323694Z" level=error msg="ContainerStatus for \"99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc\": not found"
Sep 12 10:19:00.080508 kubelet[3384]: E0912 10:19:00.080473 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc\": not found" containerID="99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc"
Sep 12 10:19:00.080574 kubelet[3384]: I0912 10:19:00.080510 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc"} err="failed to get container status \"99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc\": rpc error: code = NotFound desc = an error occurred when try to find container \"99a2c86972f74da1a9a1b304f2dd79afb88f1464a8aa54790ed00749d74b82bc\": not found"
Sep 12 10:19:00.080574 kubelet[3384]: I0912 10:19:00.080538 3384 scope.go:117] "RemoveContainer" containerID="43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6"
Sep 12 10:19:00.080747 containerd[1747]: time="2025-09-12T10:19:00.080715199Z" level=error msg="ContainerStatus for \"43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6\": not found"
Sep 12 10:19:00.080882 kubelet[3384]: E0912 10:19:00.080847 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6\": not found" containerID="43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6"
Sep 12 10:19:00.080941 kubelet[3384]: I0912 10:19:00.080877 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6"} err="failed to get container status \"43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"43ac8a10cad8252427c1cda27a6644ff3867da60c44aaa4808aafc9ad14cd3b6\": not found"
Sep 12 10:19:00.080941 kubelet[3384]: I0912 10:19:00.080900 3384 scope.go:117] "RemoveContainer" containerID="57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170"
Sep 12 10:19:00.081178 containerd[1747]: time="2025-09-12T10:19:00.081149904Z" level=error msg="ContainerStatus for \"57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170\": not found"
Sep 12 10:19:00.081284 kubelet[3384]: E0912 10:19:00.081258 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170\": not found" containerID="57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170"
Sep 12 10:19:00.081368 kubelet[3384]: I0912 10:19:00.081291 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170"} err="failed to get container status \"57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170\": rpc error: code = NotFound desc = an error occurred when try to find container \"57c0ad75124b4055b1defecd77f09508b6e6d224f7941a51e3d6e831bc1c2170\": not found"
Sep 12 10:19:00.081368 kubelet[3384]: I0912 10:19:00.081315 3384 scope.go:117] "RemoveContainer" containerID="2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024"
Sep 12 10:19:00.082068 containerd[1747]: time="2025-09-12T10:19:00.082011314Z" level=error msg="ContainerStatus for \"2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024\": not found"
Sep 12 10:19:00.082187 kubelet[3384]: E0912 10:19:00.082119 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024\": not found" containerID="2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024"
Sep 12 10:19:00.082187 kubelet[3384]: I0912 10:19:00.082142 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024"} err="failed to get container status \"2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024\": rpc error: code = NotFound desc = an error occurred when try to find container \"2bb30423c603b2def138aeaeda880751e0ad7ebca88c04f4125ce47c6c56f024\": not found"
Sep 12 10:19:00.082187 kubelet[3384]: I0912 10:19:00.082160 3384 scope.go:117] "RemoveContainer" containerID="e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482"
Sep 12 10:19:00.082360 containerd[1747]: time="2025-09-12T10:19:00.082325518Z" level=error msg="ContainerStatus for \"e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482\": not found"
Sep 12 10:19:00.082493 kubelet[3384]: E0912 10:19:00.082452 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482\": not found" containerID="e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482"
Sep 12 10:19:00.082493 kubelet[3384]: I0912 10:19:00.082481 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482"} err="failed to get container status \"e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482\": rpc error: code = NotFound desc = an error occurred when try to find container \"e026079fa10d135ab91257e6ba3eb8995b68cce1df3667206b90d30545932482\": not found"
Sep 12 10:19:00.436642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-067358a4450d8fb92edc65bac5eb6eeb84729341b7a6bdafe74c5f8f65ffc7f5-rootfs.mount: Deactivated successfully.
Sep 12 10:19:00.436793 systemd[1]: var-lib-kubelet-pods-27be7af7\x2d0b86\x2d421b\x2da91f\x2d75e015caf6fb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djxrbm.mount: Deactivated successfully.
Sep 12 10:19:00.436889 systemd[1]: var-lib-kubelet-pods-e473bc90\x2d8bce\x2d4e54\x2db4a6\x2d49d1df19f643-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddl9bx.mount: Deactivated successfully. Sep 12 10:19:00.437014 systemd[1]: var-lib-kubelet-pods-e473bc90\x2d8bce\x2d4e54\x2db4a6\x2d49d1df19f643-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 10:19:00.437111 systemd[1]: var-lib-kubelet-pods-e473bc90\x2d8bce\x2d4e54\x2db4a6\x2d49d1df19f643-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 10:19:00.599745 kubelet[3384]: I0912 10:19:00.599691 3384 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27be7af7-0b86-421b-a91f-75e015caf6fb" path="/var/lib/kubelet/pods/27be7af7-0b86-421b-a91f-75e015caf6fb/volumes" Sep 12 10:19:00.600311 kubelet[3384]: I0912 10:19:00.600272 3384 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e473bc90-8bce-4e54-b4a6-49d1df19f643" path="/var/lib/kubelet/pods/e473bc90-8bce-4e54-b4a6-49d1df19f643/volumes" Sep 12 10:19:01.462974 sshd[4994]: Connection closed by 10.200.16.10 port 58796 Sep 12 10:19:01.463761 sshd-session[4992]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:01.467331 systemd[1]: sshd@21-10.200.8.13:22-10.200.16.10:58796.service: Deactivated successfully. Sep 12 10:19:01.469843 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 10:19:01.472178 systemd-logind[1727]: Session 24 logged out. Waiting for processes to exit. Sep 12 10:19:01.473517 systemd-logind[1727]: Removed session 24. Sep 12 10:19:01.594532 systemd[1]: Started sshd@22-10.200.8.13:22-10.200.16.10:37316.service - OpenSSH per-connection server daemon (10.200.16.10:37316). 
Sep 12 10:19:02.330385 sshd[5160]: Accepted publickey for core from 10.200.16.10 port 37316 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4 Sep 12 10:19:02.331895 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:02.336561 systemd-logind[1727]: New session 25 of user core. Sep 12 10:19:02.346159 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 10:19:03.321429 systemd[1]: Created slice kubepods-burstable-podd0f253fb_5295_4ee6_8658_266769131fca.slice - libcontainer container kubepods-burstable-podd0f253fb_5295_4ee6_8658_266769131fca.slice. Sep 12 10:19:03.384293 sshd[5162]: Connection closed by 10.200.16.10 port 37316 Sep 12 10:19:03.385129 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:03.389122 systemd[1]: sshd@22-10.200.8.13:22-10.200.16.10:37316.service: Deactivated successfully. Sep 12 10:19:03.391512 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 10:19:03.392432 systemd-logind[1727]: Session 25 logged out. Waiting for processes to exit. Sep 12 10:19:03.393494 systemd-logind[1727]: Removed session 25. 
Sep 12 10:19:03.415882 kubelet[3384]: I0912 10:19:03.415830 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d0f253fb-5295-4ee6-8658-266769131fca-hubble-tls\") pod \"cilium-g8fs7\" (UID: \"d0f253fb-5295-4ee6-8658-266769131fca\") " pod="kube-system/cilium-g8fs7" Sep 12 10:19:03.415882 kubelet[3384]: I0912 10:19:03.415876 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0f253fb-5295-4ee6-8658-266769131fca-lib-modules\") pod \"cilium-g8fs7\" (UID: \"d0f253fb-5295-4ee6-8658-266769131fca\") " pod="kube-system/cilium-g8fs7" Sep 12 10:19:03.416418 kubelet[3384]: I0912 10:19:03.415903 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d0f253fb-5295-4ee6-8658-266769131fca-cilium-ipsec-secrets\") pod \"cilium-g8fs7\" (UID: \"d0f253fb-5295-4ee6-8658-266769131fca\") " pod="kube-system/cilium-g8fs7" Sep 12 10:19:03.416418 kubelet[3384]: I0912 10:19:03.415923 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d0f253fb-5295-4ee6-8658-266769131fca-bpf-maps\") pod \"cilium-g8fs7\" (UID: \"d0f253fb-5295-4ee6-8658-266769131fca\") " pod="kube-system/cilium-g8fs7" Sep 12 10:19:03.416418 kubelet[3384]: I0912 10:19:03.415943 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0f253fb-5295-4ee6-8658-266769131fca-cilium-config-path\") pod \"cilium-g8fs7\" (UID: \"d0f253fb-5295-4ee6-8658-266769131fca\") " pod="kube-system/cilium-g8fs7" Sep 12 10:19:03.416418 kubelet[3384]: I0912 10:19:03.415979 3384 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0f253fb-5295-4ee6-8658-266769131fca-hostproc\") pod \"cilium-g8fs7\" (UID: \"d0f253fb-5295-4ee6-8658-266769131fca\") " pod="kube-system/cilium-g8fs7" Sep 12 10:19:03.416418 kubelet[3384]: I0912 10:19:03.415999 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0f253fb-5295-4ee6-8658-266769131fca-host-proc-sys-kernel\") pod \"cilium-g8fs7\" (UID: \"d0f253fb-5295-4ee6-8658-266769131fca\") " pod="kube-system/cilium-g8fs7" Sep 12 10:19:03.416418 kubelet[3384]: I0912 10:19:03.416023 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0f253fb-5295-4ee6-8658-266769131fca-cilium-run\") pod \"cilium-g8fs7\" (UID: \"d0f253fb-5295-4ee6-8658-266769131fca\") " pod="kube-system/cilium-g8fs7" Sep 12 10:19:03.416559 kubelet[3384]: I0912 10:19:03.416047 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0f253fb-5295-4ee6-8658-266769131fca-cni-path\") pod \"cilium-g8fs7\" (UID: \"d0f253fb-5295-4ee6-8658-266769131fca\") " pod="kube-system/cilium-g8fs7" Sep 12 10:19:03.416559 kubelet[3384]: I0912 10:19:03.416070 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0f253fb-5295-4ee6-8658-266769131fca-etc-cni-netd\") pod \"cilium-g8fs7\" (UID: \"d0f253fb-5295-4ee6-8658-266769131fca\") " pod="kube-system/cilium-g8fs7" Sep 12 10:19:03.416559 kubelet[3384]: I0912 10:19:03.416098 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/d0f253fb-5295-4ee6-8658-266769131fca-xtables-lock\") pod \"cilium-g8fs7\" (UID: \"d0f253fb-5295-4ee6-8658-266769131fca\") " pod="kube-system/cilium-g8fs7" Sep 12 10:19:03.416559 kubelet[3384]: I0912 10:19:03.416121 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjblw\" (UniqueName: \"kubernetes.io/projected/d0f253fb-5295-4ee6-8658-266769131fca-kube-api-access-vjblw\") pod \"cilium-g8fs7\" (UID: \"d0f253fb-5295-4ee6-8658-266769131fca\") " pod="kube-system/cilium-g8fs7" Sep 12 10:19:03.416559 kubelet[3384]: I0912 10:19:03.416148 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0f253fb-5295-4ee6-8658-266769131fca-cilium-cgroup\") pod \"cilium-g8fs7\" (UID: \"d0f253fb-5295-4ee6-8658-266769131fca\") " pod="kube-system/cilium-g8fs7" Sep 12 10:19:03.416559 kubelet[3384]: I0912 10:19:03.416179 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0f253fb-5295-4ee6-8658-266769131fca-clustermesh-secrets\") pod \"cilium-g8fs7\" (UID: \"d0f253fb-5295-4ee6-8658-266769131fca\") " pod="kube-system/cilium-g8fs7" Sep 12 10:19:03.416700 kubelet[3384]: I0912 10:19:03.416206 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d0f253fb-5295-4ee6-8658-266769131fca-host-proc-sys-net\") pod \"cilium-g8fs7\" (UID: \"d0f253fb-5295-4ee6-8658-266769131fca\") " pod="kube-system/cilium-g8fs7" Sep 12 10:19:03.512332 systemd[1]: Started sshd@23-10.200.8.13:22-10.200.16.10:37318.service - OpenSSH per-connection server daemon (10.200.16.10:37318). 
Sep 12 10:19:03.625802 containerd[1747]: time="2025-09-12T10:19:03.625756634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g8fs7,Uid:d0f253fb-5295-4ee6-8658-266769131fca,Namespace:kube-system,Attempt:0,}" Sep 12 10:19:03.671786 containerd[1747]: time="2025-09-12T10:19:03.671657188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:19:03.671786 containerd[1747]: time="2025-09-12T10:19:03.671725088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:19:03.671786 containerd[1747]: time="2025-09-12T10:19:03.671740788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:19:03.672195 containerd[1747]: time="2025-09-12T10:19:03.671843689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:19:03.694184 systemd[1]: Started cri-containerd-d41c591a4bd9a8dc5fe0b345b52c82a3236ab7c4f7be03a26c497da6131636ef.scope - libcontainer container d41c591a4bd9a8dc5fe0b345b52c82a3236ab7c4f7be03a26c497da6131636ef. 
Sep 12 10:19:03.718269 containerd[1747]: time="2025-09-12T10:19:03.717874144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g8fs7,Uid:d0f253fb-5295-4ee6-8658-266769131fca,Namespace:kube-system,Attempt:0,} returns sandbox id \"d41c591a4bd9a8dc5fe0b345b52c82a3236ab7c4f7be03a26c497da6131636ef\"" Sep 12 10:19:03.726934 containerd[1747]: time="2025-09-12T10:19:03.726881074Z" level=info msg="CreateContainer within sandbox \"d41c591a4bd9a8dc5fe0b345b52c82a3236ab7c4f7be03a26c497da6131636ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 10:19:03.754980 containerd[1747]: time="2025-09-12T10:19:03.754915669Z" level=info msg="CreateContainer within sandbox \"d41c591a4bd9a8dc5fe0b345b52c82a3236ab7c4f7be03a26c497da6131636ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"78aee972927cdc06393a690d0fc3b809345256927c3a85e43bf70fc5c17b01dd\"" Sep 12 10:19:03.755778 containerd[1747]: time="2025-09-12T10:19:03.755741971Z" level=info msg="StartContainer for \"78aee972927cdc06393a690d0fc3b809345256927c3a85e43bf70fc5c17b01dd\"" Sep 12 10:19:03.789169 systemd[1]: Started cri-containerd-78aee972927cdc06393a690d0fc3b809345256927c3a85e43bf70fc5c17b01dd.scope - libcontainer container 78aee972927cdc06393a690d0fc3b809345256927c3a85e43bf70fc5c17b01dd. Sep 12 10:19:03.822034 containerd[1747]: time="2025-09-12T10:19:03.821937394Z" level=info msg="StartContainer for \"78aee972927cdc06393a690d0fc3b809345256927c3a85e43bf70fc5c17b01dd\" returns successfully" Sep 12 10:19:03.828140 systemd[1]: cri-containerd-78aee972927cdc06393a690d0fc3b809345256927c3a85e43bf70fc5c17b01dd.scope: Deactivated successfully. 
Sep 12 10:19:03.901617 containerd[1747]: time="2025-09-12T10:19:03.901254561Z" level=info msg="shim disconnected" id=78aee972927cdc06393a690d0fc3b809345256927c3a85e43bf70fc5c17b01dd namespace=k8s.io Sep 12 10:19:03.901617 containerd[1747]: time="2025-09-12T10:19:03.901335062Z" level=warning msg="cleaning up after shim disconnected" id=78aee972927cdc06393a690d0fc3b809345256927c3a85e43bf70fc5c17b01dd namespace=k8s.io Sep 12 10:19:03.901617 containerd[1747]: time="2025-09-12T10:19:03.901345962Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:04.009819 containerd[1747]: time="2025-09-12T10:19:04.009728527Z" level=info msg="CreateContainer within sandbox \"d41c591a4bd9a8dc5fe0b345b52c82a3236ab7c4f7be03a26c497da6131636ef\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 10:19:04.046233 containerd[1747]: time="2025-09-12T10:19:04.046182049Z" level=info msg="CreateContainer within sandbox \"d41c591a4bd9a8dc5fe0b345b52c82a3236ab7c4f7be03a26c497da6131636ef\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"70bbb67557782600b09902ffacc0006c58cdb1d7a7676fc7542c31752b1105e6\"" Sep 12 10:19:04.046995 containerd[1747]: time="2025-09-12T10:19:04.046791751Z" level=info msg="StartContainer for \"70bbb67557782600b09902ffacc0006c58cdb1d7a7676fc7542c31752b1105e6\"" Sep 12 10:19:04.074159 systemd[1]: Started cri-containerd-70bbb67557782600b09902ffacc0006c58cdb1d7a7676fc7542c31752b1105e6.scope - libcontainer container 70bbb67557782600b09902ffacc0006c58cdb1d7a7676fc7542c31752b1105e6. Sep 12 10:19:04.113371 containerd[1747]: time="2025-09-12T10:19:04.113131175Z" level=info msg="StartContainer for \"70bbb67557782600b09902ffacc0006c58cdb1d7a7676fc7542c31752b1105e6\" returns successfully" Sep 12 10:19:04.118663 systemd[1]: cri-containerd-70bbb67557782600b09902ffacc0006c58cdb1d7a7676fc7542c31752b1105e6.scope: Deactivated successfully. 
Sep 12 10:19:04.156726 containerd[1747]: time="2025-09-12T10:19:04.156557421Z" level=info msg="shim disconnected" id=70bbb67557782600b09902ffacc0006c58cdb1d7a7676fc7542c31752b1105e6 namespace=k8s.io Sep 12 10:19:04.156726 containerd[1747]: time="2025-09-12T10:19:04.156627521Z" level=warning msg="cleaning up after shim disconnected" id=70bbb67557782600b09902ffacc0006c58cdb1d7a7676fc7542c31752b1105e6 namespace=k8s.io Sep 12 10:19:04.156726 containerd[1747]: time="2025-09-12T10:19:04.156638821Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:04.210759 sshd[5173]: Accepted publickey for core from 10.200.16.10 port 37318 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4 Sep 12 10:19:04.212320 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:04.217059 systemd-logind[1727]: New session 26 of user core. Sep 12 10:19:04.230160 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 10:19:04.652154 sshd[5344]: Connection closed by 10.200.16.10 port 37318 Sep 12 10:19:04.652939 sshd-session[5173]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:04.656240 systemd[1]: sshd@23-10.200.8.13:22-10.200.16.10:37318.service: Deactivated successfully. Sep 12 10:19:04.658643 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 10:19:04.660796 systemd-logind[1727]: Session 26 logged out. Waiting for processes to exit. Sep 12 10:19:04.662356 systemd-logind[1727]: Removed session 26. Sep 12 10:19:04.722695 kubelet[3384]: E0912 10:19:04.722642 3384 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 10:19:04.768291 systemd[1]: Started sshd@24-10.200.8.13:22-10.200.16.10:37320.service - OpenSSH per-connection server daemon (10.200.16.10:37320). 
Sep 12 10:19:05.013917 containerd[1747]: time="2025-09-12T10:19:05.013528107Z" level=info msg="CreateContainer within sandbox \"d41c591a4bd9a8dc5fe0b345b52c82a3236ab7c4f7be03a26c497da6131636ef\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 10:19:05.052882 containerd[1747]: time="2025-09-12T10:19:05.052831939Z" level=info msg="CreateContainer within sandbox \"d41c591a4bd9a8dc5fe0b345b52c82a3236ab7c4f7be03a26c497da6131636ef\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b6d79871255d09b446d8b5a99d8cd196fd9ca47720fac0441fb55872d0faa94f\"" Sep 12 10:19:05.053916 containerd[1747]: time="2025-09-12T10:19:05.053873943Z" level=info msg="StartContainer for \"b6d79871255d09b446d8b5a99d8cd196fd9ca47720fac0441fb55872d0faa94f\"" Sep 12 10:19:05.091173 systemd[1]: Started cri-containerd-b6d79871255d09b446d8b5a99d8cd196fd9ca47720fac0441fb55872d0faa94f.scope - libcontainer container b6d79871255d09b446d8b5a99d8cd196fd9ca47720fac0441fb55872d0faa94f. Sep 12 10:19:05.121803 systemd[1]: cri-containerd-b6d79871255d09b446d8b5a99d8cd196fd9ca47720fac0441fb55872d0faa94f.scope: Deactivated successfully. Sep 12 10:19:05.126062 containerd[1747]: time="2025-09-12T10:19:05.126021286Z" level=info msg="StartContainer for \"b6d79871255d09b446d8b5a99d8cd196fd9ca47720fac0441fb55872d0faa94f\" returns successfully" Sep 12 10:19:05.150835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6d79871255d09b446d8b5a99d8cd196fd9ca47720fac0441fb55872d0faa94f-rootfs.mount: Deactivated successfully. 
Sep 12 10:19:05.163266 containerd[1747]: time="2025-09-12T10:19:05.163197911Z" level=info msg="shim disconnected" id=b6d79871255d09b446d8b5a99d8cd196fd9ca47720fac0441fb55872d0faa94f namespace=k8s.io Sep 12 10:19:05.163266 containerd[1747]: time="2025-09-12T10:19:05.163261511Z" level=warning msg="cleaning up after shim disconnected" id=b6d79871255d09b446d8b5a99d8cd196fd9ca47720fac0441fb55872d0faa94f namespace=k8s.io Sep 12 10:19:05.163266 containerd[1747]: time="2025-09-12T10:19:05.163275111Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:05.390772 sshd[5352]: Accepted publickey for core from 10.200.16.10 port 37320 ssh2: RSA SHA256:r6EXdlrmYy16/qU1z8eNEnbT4f+dJX2z9SgUoSFmsI4 Sep 12 10:19:05.392340 sshd-session[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:05.398280 systemd-logind[1727]: New session 27 of user core. Sep 12 10:19:05.401161 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 10:19:06.022225 containerd[1747]: time="2025-09-12T10:19:06.022182503Z" level=info msg="CreateContainer within sandbox \"d41c591a4bd9a8dc5fe0b345b52c82a3236ab7c4f7be03a26c497da6131636ef\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 10:19:06.058199 containerd[1747]: time="2025-09-12T10:19:06.058145825Z" level=info msg="CreateContainer within sandbox \"d41c591a4bd9a8dc5fe0b345b52c82a3236ab7c4f7be03a26c497da6131636ef\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"14d5c006c16172825c7e5ba92bd3cf6a8dac8f5435b367e8e7377fbde5645f86\"" Sep 12 10:19:06.058985 containerd[1747]: time="2025-09-12T10:19:06.058829027Z" level=info msg="StartContainer for \"14d5c006c16172825c7e5ba92bd3cf6a8dac8f5435b367e8e7377fbde5645f86\"" Sep 12 10:19:06.098135 systemd[1]: Started cri-containerd-14d5c006c16172825c7e5ba92bd3cf6a8dac8f5435b367e8e7377fbde5645f86.scope - libcontainer container 
14d5c006c16172825c7e5ba92bd3cf6a8dac8f5435b367e8e7377fbde5645f86. Sep 12 10:19:06.123648 systemd[1]: cri-containerd-14d5c006c16172825c7e5ba92bd3cf6a8dac8f5435b367e8e7377fbde5645f86.scope: Deactivated successfully. Sep 12 10:19:06.128537 containerd[1747]: time="2025-09-12T10:19:06.127800159Z" level=info msg="StartContainer for \"14d5c006c16172825c7e5ba92bd3cf6a8dac8f5435b367e8e7377fbde5645f86\" returns successfully" Sep 12 10:19:06.149884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14d5c006c16172825c7e5ba92bd3cf6a8dac8f5435b367e8e7377fbde5645f86-rootfs.mount: Deactivated successfully. Sep 12 10:19:06.161144 containerd[1747]: time="2025-09-12T10:19:06.161068971Z" level=info msg="shim disconnected" id=14d5c006c16172825c7e5ba92bd3cf6a8dac8f5435b367e8e7377fbde5645f86 namespace=k8s.io Sep 12 10:19:06.161144 containerd[1747]: time="2025-09-12T10:19:06.161139371Z" level=warning msg="cleaning up after shim disconnected" id=14d5c006c16172825c7e5ba92bd3cf6a8dac8f5435b367e8e7377fbde5645f86 namespace=k8s.io Sep 12 10:19:06.161612 containerd[1747]: time="2025-09-12T10:19:06.161156471Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:06.599528 kubelet[3384]: E0912 10:19:06.598350 3384 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-cr4rd" podUID="0ff64da6-78b8-40f2-9e37-d5f9f259769a" Sep 12 10:19:07.020649 containerd[1747]: time="2025-09-12T10:19:07.020519765Z" level=info msg="CreateContainer within sandbox \"d41c591a4bd9a8dc5fe0b345b52c82a3236ab7c4f7be03a26c497da6131636ef\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 10:19:07.069147 containerd[1747]: time="2025-09-12T10:19:07.068987428Z" level=info msg="CreateContainer within sandbox 
\"d41c591a4bd9a8dc5fe0b345b52c82a3236ab7c4f7be03a26c497da6131636ef\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6f7fc40206e20315ba5d700aff3b17104473883a400535f7f52ae944a84a2657\"" Sep 12 10:19:07.070599 containerd[1747]: time="2025-09-12T10:19:07.070562034Z" level=info msg="StartContainer for \"6f7fc40206e20315ba5d700aff3b17104473883a400535f7f52ae944a84a2657\"" Sep 12 10:19:07.132619 systemd[1]: Started cri-containerd-6f7fc40206e20315ba5d700aff3b17104473883a400535f7f52ae944a84a2657.scope - libcontainer container 6f7fc40206e20315ba5d700aff3b17104473883a400535f7f52ae944a84a2657. Sep 12 10:19:07.188028 containerd[1747]: time="2025-09-12T10:19:07.185735722Z" level=info msg="StartContainer for \"6f7fc40206e20315ba5d700aff3b17104473883a400535f7f52ae944a84a2657\" returns successfully" Sep 12 10:19:07.359926 kubelet[3384]: I0912 10:19:07.358699 3384 setters.go:618] "Node became not ready" node="ci-4230.2.2-n-6349f41dc3" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T10:19:07Z","lastTransitionTime":"2025-09-12T10:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 10:19:07.727995 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 12 10:19:08.035785 kubelet[3384]: I0912 10:19:08.035718 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g8fs7" podStartSLOduration=5.035686087 podStartE2EDuration="5.035686087s" podCreationTimestamp="2025-09-12 10:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:19:08.034647576 +0000 UTC m=+163.529861009" watchObservedRunningTime="2025-09-12 10:19:08.035686087 +0000 UTC m=+163.530899520" Sep 12 10:19:08.048065 systemd[1]: 
run-containerd-runc-k8s.io-6f7fc40206e20315ba5d700aff3b17104473883a400535f7f52ae944a84a2657-runc.7WDHvG.mount: Deactivated successfully. Sep 12 10:19:08.598691 kubelet[3384]: E0912 10:19:08.598213 3384 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-cr4rd" podUID="0ff64da6-78b8-40f2-9e37-d5f9f259769a" Sep 12 10:19:09.957779 systemd[1]: run-containerd-runc-k8s.io-6f7fc40206e20315ba5d700aff3b17104473883a400535f7f52ae944a84a2657-runc.CxU98o.mount: Deactivated successfully. Sep 12 10:19:10.691122 systemd-networkd[1580]: lxc_health: Link UP Sep 12 10:19:10.725011 systemd-networkd[1580]: lxc_health: Gained carrier Sep 12 10:19:12.076191 systemd-networkd[1580]: lxc_health: Gained IPv6LL Sep 12 10:19:12.141627 systemd[1]: run-containerd-runc-k8s.io-6f7fc40206e20315ba5d700aff3b17104473883a400535f7f52ae944a84a2657-runc.nAaGJL.mount: Deactivated successfully. Sep 12 10:19:12.256207 kubelet[3384]: E0912 10:19:12.256156 3384 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:33106->127.0.0.1:38161: write tcp 127.0.0.1:33106->127.0.0.1:38161: write: broken pipe Sep 12 10:19:16.642092 sshd[5411]: Connection closed by 10.200.16.10 port 37320 Sep 12 10:19:16.643091 sshd-session[5352]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:16.647269 systemd[1]: sshd@24-10.200.8.13:22-10.200.16.10:37320.service: Deactivated successfully. Sep 12 10:19:16.650403 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 10:19:16.652203 systemd-logind[1727]: Session 27 logged out. Waiting for processes to exit. Sep 12 10:19:16.653374 systemd-logind[1727]: Removed session 27.