Nov 6 23:40:24.138070 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Nov 6 22:02:38 -00 2025 Nov 6 23:40:24.138110 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1a4810aa24298684dd9efd264f1d9b812e4e16f32429f4615db9ff284dd4ac25 Nov 6 23:40:24.138124 kernel: BIOS-provided physical RAM map: Nov 6 23:40:24.138135 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 6 23:40:24.138145 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Nov 6 23:40:24.138155 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Nov 6 23:40:24.138167 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc4fff] reserved Nov 6 23:40:24.138179 kernel: BIOS-e820: [mem 0x000000003ffc5000-0x000000003ffd0fff] usable Nov 6 23:40:24.138194 kernel: BIOS-e820: [mem 0x000000003ffd1000-0x000000003fffafff] ACPI data Nov 6 23:40:24.138205 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Nov 6 23:40:24.138217 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Nov 6 23:40:24.138229 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Nov 6 23:40:24.138240 kernel: printk: bootconsole [earlyser0] enabled Nov 6 23:40:24.138252 kernel: NX (Execute Disable) protection: active Nov 6 23:40:24.138269 kernel: APIC: Static calls initialized Nov 6 23:40:24.138281 kernel: efi: EFI v2.7 by Microsoft Nov 6 23:40:24.138294 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ebf5a98 RNG=0x3ffd2018 Nov 6 23:40:24.138306 kernel: random: crng init done Nov 6 23:40:24.138318 kernel: secureboot: Secure boot disabled Nov 6 23:40:24.138330 kernel: SMBIOS 3.1.0 present. 
Nov 6 23:40:24.138342 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Nov 6 23:40:24.138354 kernel: Hypervisor detected: Microsoft Hyper-V Nov 6 23:40:24.138366 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Nov 6 23:40:24.138379 kernel: Hyper-V: Host Build 10.0.26100.1414-1-0 Nov 6 23:40:24.138394 kernel: Hyper-V: Nested features: 0x1e0101 Nov 6 23:40:24.138407 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Nov 6 23:40:24.138419 kernel: Hyper-V: Using hypercall for remote TLB flush Nov 6 23:40:24.138431 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 6 23:40:24.138444 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 6 23:40:24.138457 kernel: tsc: Marking TSC unstable due to running on Hyper-V Nov 6 23:40:24.138469 kernel: tsc: Detected 2593.906 MHz processor Nov 6 23:40:24.138483 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 23:40:24.138496 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 23:40:24.138509 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Nov 6 23:40:24.138527 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 6 23:40:24.138542 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 23:40:24.138555 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Nov 6 23:40:24.138569 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Nov 6 23:40:24.138581 kernel: Using GB pages for direct mapping Nov 6 23:40:24.138594 kernel: ACPI: Early table checksum verification disabled Nov 6 23:40:24.138612 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Nov 6 23:40:24.138630 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 23:40:24.138645 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 23:40:24.138660 kernel: ACPI: DSDT 0x000000003FFD6000 01E11C (v02 MSFTVM DSDT01 00000001 INTL 20230628) Nov 6 23:40:24.138675 kernel: ACPI: FACS 0x000000003FFFE000 000040 Nov 6 23:40:24.138688 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 23:40:24.138702 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 23:40:24.138719 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 23:40:24.138733 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 23:40:24.138747 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 23:40:24.138762 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 23:40:24.138776 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Nov 6 23:40:24.138791 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff411b] Nov 6 23:40:24.138806 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Nov 6 23:40:24.138820 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Nov 6 23:40:24.138833 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Nov 6 23:40:24.138851 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Nov 6 23:40:24.138866 kernel: ACPI: 
Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Nov 6 23:40:24.138882 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Nov 6 23:40:24.138897 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Nov 6 23:40:24.138912 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 6 23:40:24.138927 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 6 23:40:24.138940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Nov 6 23:40:24.138953 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Nov 6 23:40:24.138967 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Nov 6 23:40:24.138984 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Nov 6 23:40:24.138998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Nov 6 23:40:24.139012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Nov 6 23:40:24.139025 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Nov 6 23:40:24.139039 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Nov 6 23:40:24.146569 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Nov 6 23:40:24.146596 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Nov 6 23:40:24.146611 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Nov 6 23:40:24.146625 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Nov 6 23:40:24.146645 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Nov 6 23:40:24.146658 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Nov 6 23:40:24.146672 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Nov 6 23:40:24.146685 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Nov 6 23:40:24.146700 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Nov 6 23:40:24.146714 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Nov 6 23:40:24.146728 kernel: Zone ranges: Nov 6 23:40:24.146742 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 23:40:24.146759 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 6 23:40:24.146771 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Nov 6 23:40:24.146784 kernel: Movable zone start for each node Nov 6 23:40:24.146798 kernel: Early memory node ranges Nov 6 23:40:24.146811 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 6 23:40:24.146825 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Nov 6 23:40:24.146839 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd0fff] Nov 6 23:40:24.146852 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Nov 6 23:40:24.146866 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Nov 6 23:40:24.146880 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Nov 6 23:40:24.146897 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 23:40:24.146911 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 6 23:40:24.146925 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges Nov 6 23:40:24.146938 kernel: On node 0, zone DMA32: 46 pages in unavailable ranges Nov 6 23:40:24.146952 kernel: ACPI: PM-Timer IO Port: 0x408 Nov 6 23:40:24.146972 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Nov 6 
23:40:24.146986 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Nov 6 23:40:24.146999 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 23:40:24.147013 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 23:40:24.147031 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Nov 6 23:40:24.147044 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 6 23:40:24.147071 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Nov 6 23:40:24.147091 kernel: Booting paravirtualized kernel on Hyper-V Nov 6 23:40:24.147109 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 23:40:24.147126 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 6 23:40:24.147143 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 6 23:40:24.147160 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 6 23:40:24.147183 kernel: pcpu-alloc: [0] 0 1 Nov 6 23:40:24.147199 kernel: Hyper-V: PV spinlocks enabled Nov 6 23:40:24.147216 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 6 23:40:24.147236 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1a4810aa24298684dd9efd264f1d9b812e4e16f32429f4615db9ff284dd4ac25 Nov 6 23:40:24.147254 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 6 23:40:24.147271 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 6 23:40:24.147288 kernel: Fallback order for Node 0: 0 Nov 6 23:40:24.147305 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062374 Nov 6 23:40:24.147322 kernel: Policy zone: Normal Nov 6 23:40:24.147356 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 23:40:24.147374 kernel: software IO TLB: area num 2. Nov 6 23:40:24.147396 kernel: Memory: 8072560K/8387508K available (14336K kernel code, 2288K rwdata, 22872K rodata, 43520K init, 1560K bss, 314692K reserved, 0K cma-reserved) Nov 6 23:40:24.147414 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 6 23:40:24.147431 kernel: ftrace: allocating 37954 entries in 149 pages Nov 6 23:40:24.147449 kernel: ftrace: allocated 149 pages with 4 groups Nov 6 23:40:24.147467 kernel: Dynamic Preempt: voluntary Nov 6 23:40:24.147485 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 23:40:24.147505 kernel: rcu: RCU event tracing is enabled. Nov 6 23:40:24.147527 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 6 23:40:24.147546 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 23:40:24.147565 kernel: Rude variant of Tasks RCU enabled. Nov 6 23:40:24.147584 kernel: Tracing variant of Tasks RCU enabled. Nov 6 23:40:24.147603 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 23:40:24.147622 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 6 23:40:24.147641 kernel: Using NULL legacy PIC Nov 6 23:40:24.147663 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Nov 6 23:40:24.147683 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 6 23:40:24.147701 kernel: Console: colour dummy device 80x25 Nov 6 23:40:24.147718 kernel: printk: console [tty1] enabled Nov 6 23:40:24.147733 kernel: printk: console [ttyS0] enabled Nov 6 23:40:24.147746 kernel: printk: bootconsole [earlyser0] disabled Nov 6 23:40:24.147758 kernel: ACPI: Core revision 20230628 Nov 6 23:40:24.147772 kernel: Failed to register legacy timer interrupt Nov 6 23:40:24.147786 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 23:40:24.147804 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 6 23:40:24.147818 kernel: Hyper-V: Using IPI hypercalls Nov 6 23:40:24.147830 kernel: APIC: send_IPI() replaced with hv_send_ipi() Nov 6 23:40:24.147844 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Nov 6 23:40:24.147859 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Nov 6 23:40:24.147873 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Nov 6 23:40:24.147888 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Nov 6 23:40:24.147903 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Nov 6 23:40:24.147918 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Nov 6 23:40:24.147936 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 6 23:40:24.147951 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 6 23:40:24.147966 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 23:40:24.147980 kernel: Spectre V2 : Mitigation: Retpolines Nov 6 23:40:24.147994 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 23:40:24.148009 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Nov 6 23:40:24.148023 kernel: RETBleed: Vulnerable Nov 6 23:40:24.148038 kernel: Speculative Store Bypass: Vulnerable Nov 6 23:40:24.148103 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Nov 6 23:40:24.148119 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 6 23:40:24.148133 kernel: active return thunk: its_return_thunk Nov 6 23:40:24.148152 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 6 23:40:24.148166 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 23:40:24.148181 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 23:40:24.148195 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 23:40:24.148210 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 6 23:40:24.148224 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 6 23:40:24.148238 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 6 23:40:24.148252 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 23:40:24.148267 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Nov 6 23:40:24.148281 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Nov 6 23:40:24.148296 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Nov 6 23:40:24.148313 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. 
Nov 6 23:40:24.148328 kernel: Freeing SMP alternatives memory: 32K Nov 6 23:40:24.148342 kernel: pid_max: default: 32768 minimum: 301 Nov 6 23:40:24.148357 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 6 23:40:24.148371 kernel: landlock: Up and running. Nov 6 23:40:24.148385 kernel: SELinux: Initializing. Nov 6 23:40:24.148399 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 23:40:24.148414 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 23:40:24.148429 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Nov 6 23:40:24.148443 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 23:40:24.148458 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 23:40:24.148476 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 23:40:24.148491 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 6 23:40:24.148505 kernel: signal: max sigframe size: 3632 Nov 6 23:40:24.148520 kernel: rcu: Hierarchical SRCU implementation. Nov 6 23:40:24.148535 kernel: rcu: Max phase no-delay instances is 400. Nov 6 23:40:24.148550 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 6 23:40:24.148565 kernel: smp: Bringing up secondary CPUs ... Nov 6 23:40:24.148579 kernel: smpboot: x86: Booting SMP configuration: Nov 6 23:40:24.148594 kernel: .... node #0, CPUs: #1 Nov 6 23:40:24.148612 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Nov 6 23:40:24.148628 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Nov 6 23:40:24.148642 kernel: smp: Brought up 1 node, 2 CPUs Nov 6 23:40:24.148657 kernel: smpboot: Max logical packages: 1 Nov 6 23:40:24.148672 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Nov 6 23:40:24.148687 kernel: devtmpfs: initialized Nov 6 23:40:24.148701 kernel: x86/mm: Memory block size: 128MB Nov 6 23:40:24.148716 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Nov 6 23:40:24.148734 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 23:40:24.148749 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 6 23:40:24.148764 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 23:40:24.148778 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 23:40:24.148792 kernel: audit: initializing netlink subsys (disabled) Nov 6 23:40:24.148807 kernel: audit: type=2000 audit(1762472422.030:1): state=initialized audit_enabled=0 res=1 Nov 6 23:40:24.148822 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 23:40:24.148837 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 23:40:24.148852 kernel: cpuidle: using governor menu Nov 6 23:40:24.148870 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 23:40:24.148885 kernel: dca service started, version 1.12.1 Nov 6 23:40:24.148900 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Nov 6 23:40:24.148914 kernel: e820: reserve RAM buffer [mem 0x3ffd1000-0x3fffffff] Nov 6 23:40:24.148928 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 6 23:40:24.148943 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 6 23:40:24.148958 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 6 23:40:24.148973 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 23:40:24.148988 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 23:40:24.149006 kernel: ACPI: Added _OSI(Module Device) Nov 6 23:40:24.149021 kernel: ACPI: Added _OSI(Processor Device) Nov 6 23:40:24.149035 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 23:40:24.149050 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 6 23:40:24.149083 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 6 23:40:24.149097 kernel: ACPI: Interpreter enabled Nov 6 23:40:24.149113 kernel: ACPI: PM: (supports S0 S5) Nov 6 23:40:24.149130 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 23:40:24.149148 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 23:40:24.149171 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 6 23:40:24.149187 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Nov 6 23:40:24.149204 kernel: iommu: Default domain type: Translated Nov 6 23:40:24.149221 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 23:40:24.149237 kernel: efivars: Registered efivars operations Nov 6 23:40:24.149254 kernel: PCI: Using ACPI for IRQ routing Nov 6 23:40:24.149271 kernel: PCI: System does not support PCI Nov 6 23:40:24.149287 kernel: vgaarb: loaded Nov 6 23:40:24.149303 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Nov 6 23:40:24.149325 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 23:40:24.149341 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 23:40:24.149357 kernel: pnp: PnP ACPI init Nov 6 
23:40:24.149374 kernel: pnp: PnP ACPI: found 3 devices Nov 6 23:40:24.149390 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 23:40:24.149407 kernel: NET: Registered PF_INET protocol family Nov 6 23:40:24.149423 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 6 23:40:24.149440 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 6 23:40:24.149457 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 23:40:24.149478 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 6 23:40:24.149494 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 6 23:40:24.149510 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 6 23:40:24.149526 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 6 23:40:24.149542 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 6 23:40:24.149557 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 23:40:24.149573 kernel: NET: Registered PF_XDP protocol family Nov 6 23:40:24.149589 kernel: PCI: CLS 0 bytes, default 64 Nov 6 23:40:24.149606 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 6 23:40:24.149627 kernel: software IO TLB: mapped [mem 0x000000003abf5000-0x000000003ebf5000] (64MB) Nov 6 23:40:24.149643 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 6 23:40:24.149659 kernel: Initialise system trusted keyrings Nov 6 23:40:24.149677 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 6 23:40:24.149694 kernel: Key type asymmetric registered Nov 6 23:40:24.149710 kernel: Asymmetric key parser 'x509' registered Nov 6 23:40:24.149726 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 6 23:40:24.149743 kernel: io scheduler mq-deadline registered Nov 6 23:40:24.149759 kernel: io scheduler kyber registered Nov 6 23:40:24.149780 kernel: io scheduler bfq registered Nov 6 23:40:24.149797 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 23:40:24.149813 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 23:40:24.149830 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 23:40:24.149846 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 6 23:40:24.149863 kernel: i8042: PNP: No PS/2 controller found. 
Nov 6 23:40:24.154504 kernel: rtc_cmos 00:02: registered as rtc0 Nov 6 23:40:24.154659 kernel: rtc_cmos 00:02: setting system clock to 2025-11-06T23:40:23 UTC (1762472423) Nov 6 23:40:24.154840 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Nov 6 23:40:24.154861 kernel: intel_pstate: CPU model not supported Nov 6 23:40:24.154876 kernel: efifb: probing for efifb Nov 6 23:40:24.154890 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 6 23:40:24.154904 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 6 23:40:24.154919 kernel: efifb: scrolling: redraw Nov 6 23:40:24.154933 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 6 23:40:24.154947 kernel: Console: switching to colour frame buffer device 128x48 Nov 6 23:40:24.154966 kernel: fb0: EFI VGA frame buffer device Nov 6 23:40:24.154980 kernel: pstore: Using crash dump compression: deflate Nov 6 23:40:24.154994 kernel: pstore: Registered efi_pstore as persistent store backend Nov 6 23:40:24.155008 kernel: NET: Registered PF_INET6 protocol family Nov 6 23:40:24.155023 kernel: Segment Routing with IPv6 Nov 6 23:40:24.155037 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 23:40:24.155073 kernel: NET: Registered PF_PACKET protocol family Nov 6 23:40:24.155089 kernel: Key type dns_resolver registered Nov 6 23:40:24.155103 kernel: IPI shorthand broadcast: enabled Nov 6 23:40:24.155117 kernel: sched_clock: Marking stable (943140600, 53355400)->(1228591200, -232095200) Nov 6 23:40:24.155136 kernel: registered taskstats version 1 Nov 6 23:40:24.155151 kernel: Loading compiled-in X.509 certificates Nov 6 23:40:24.155165 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: d06f6bc77ef9183fbb55ec1fc021fe2cce974996' Nov 6 23:40:24.155178 kernel: Key type .fscrypt registered Nov 6 23:40:24.155192 kernel: Key type fscrypt-provisioning registered Nov 6 23:40:24.155206 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 6 23:40:24.155220 kernel: ima: Allocated hash algorithm: sha1 Nov 6 23:40:24.155234 kernel: ima: No architecture policies found Nov 6 23:40:24.155248 kernel: clk: Disabling unused clocks Nov 6 23:40:24.155265 kernel: Freeing unused kernel image (initmem) memory: 43520K Nov 6 23:40:24.155280 kernel: Write protecting the kernel read-only data: 38912k Nov 6 23:40:24.155294 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Nov 6 23:40:24.155308 kernel: Run /init as init process Nov 6 23:40:24.155322 kernel: with arguments: Nov 6 23:40:24.155336 kernel: /init Nov 6 23:40:24.155350 kernel: with environment: Nov 6 23:40:24.155364 kernel: HOME=/ Nov 6 23:40:24.155377 kernel: TERM=linux Nov 6 23:40:24.155395 systemd[1]: Successfully made /usr/ read-only. Nov 6 23:40:24.155414 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 23:40:24.155430 systemd[1]: Detected virtualization microsoft. Nov 6 23:40:24.155444 systemd[1]: Detected architecture x86-64. Nov 6 23:40:24.155458 systemd[1]: Running in initrd. Nov 6 23:40:24.155473 systemd[1]: No hostname configured, using default hostname. Nov 6 23:40:24.155489 systemd[1]: Hostname set to . Nov 6 23:40:24.155507 systemd[1]: Initializing machine ID from random generator. 
Nov 6 23:40:24.155522 systemd[1]: Queued start job for default target initrd.target. Nov 6 23:40:24.155537 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:40:24.155552 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:40:24.155568 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 6 23:40:24.155583 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 23:40:24.155598 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 6 23:40:24.155618 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 6 23:40:24.155635 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 6 23:40:24.155650 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 6 23:40:24.155665 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 23:40:24.155680 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:40:24.155695 systemd[1]: Reached target paths.target - Path Units. Nov 6 23:40:24.155710 systemd[1]: Reached target slices.target - Slice Units. Nov 6 23:40:24.155724 systemd[1]: Reached target swap.target - Swaps. Nov 6 23:40:24.155743 systemd[1]: Reached target timers.target - Timer Units. Nov 6 23:40:24.155758 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 23:40:24.155773 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 23:40:24.155788 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 6 23:40:24.155803 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 6 23:40:24.155818 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:40:24.155833 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 23:40:24.155848 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:40:24.155864 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 23:40:24.155881 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 6 23:40:24.155896 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 23:40:24.155911 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 6 23:40:24.155926 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 23:40:24.155941 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 23:40:24.155956 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 23:40:24.155971 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:40:24.156012 systemd-journald[177]: Collecting audit messages is disabled. Nov 6 23:40:24.156049 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 23:40:24.158885 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:40:24.158908 systemd[1]: Finished systemd-fsck-usr.service. 
Nov 6 23:40:24.158926 systemd-journald[177]: Journal started Nov 6 23:40:24.158975 systemd-journald[177]: Runtime Journal (/run/log/journal/f8b27348caa14be582557cff5c16e3df) is 8M, max 158.8M, 150.8M free. Nov 6 23:40:24.129299 systemd-modules-load[178]: Inserted module 'overlay' Nov 6 23:40:24.172573 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 23:40:24.173451 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:40:24.182503 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 6 23:40:24.182524 kernel: Bridge firewalling registered Nov 6 23:40:24.184654 systemd-modules-load[178]: Inserted module 'br_netfilter' Nov 6 23:40:24.190802 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 23:40:24.204328 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:40:24.210195 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:40:24.224234 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 23:40:24.234166 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 23:40:24.242155 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:40:24.250353 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:40:24.253953 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 23:40:24.264870 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:40:24.277170 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 6 23:40:24.284038 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 23:40:24.293262 dracut-cmdline[208]: dracut-dracut-053 Nov 6 23:40:24.296353 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1a4810aa24298684dd9efd264f1d9b812e4e16f32429f4615db9ff284dd4ac25 Nov 6 23:40:24.317345 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 23:40:24.336524 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:40:24.377515 systemd-resolved[210]: Positive Trust Anchors: Nov 6 23:40:24.377530 systemd-resolved[210]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 23:40:24.377589 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 23:40:24.406531 systemd-resolved[210]: Defaulting to hostname 'linux'. Nov 6 23:40:24.418553 kernel: SCSI subsystem initialized Nov 6 23:40:24.407736 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 23:40:24.415244 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:40:24.431075 kernel: Loading iSCSI transport class v2.0-870. Nov 6 23:40:24.442068 kernel: iscsi: registered transport (tcp) Nov 6 23:40:24.463710 kernel: iscsi: registered transport (qla4xxx) Nov 6 23:40:24.463754 kernel: QLogic iSCSI HBA Driver Nov 6 23:40:24.500535 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 6 23:40:24.513258 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 23:40:24.544409 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 6 23:40:24.544496 kernel: device-mapper: uevent: version 1.0.3 Nov 6 23:40:24.549074 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 6 23:40:24.588092 kernel: raid6: avx512x4 gen() 18547 MB/s Nov 6 23:40:24.608067 kernel: raid6: avx512x2 gen() 18464 MB/s Nov 6 23:40:24.627063 kernel: raid6: avx512x1 gen() 18569 MB/s Nov 6 23:40:24.646064 kernel: raid6: avx2x4 gen() 18509 MB/s Nov 6 23:40:24.666068 kernel: raid6: avx2x2 gen() 18517 MB/s Nov 6 23:40:24.686599 kernel: raid6: avx2x1 gen() 14037 MB/s Nov 6 23:40:24.686643 kernel: raid6: using algorithm avx512x1 gen() 18569 MB/s Nov 6 23:40:24.708483 kernel: raid6: .... xor() 26775 MB/s, rmw enabled Nov 6 23:40:24.708515 kernel: raid6: using avx512x2 recovery algorithm Nov 6 23:40:24.732081 kernel: xor: automatically using best checksumming function avx Nov 6 23:40:24.875078 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 23:40:24.884661 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 23:40:24.895299 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:40:24.914146 systemd-udevd[396]: Using default interface naming scheme 'v255'. Nov 6 23:40:24.919395 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:40:24.934227 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 23:40:24.948693 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Nov 6 23:40:24.976033 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 23:40:24.984281 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 23:40:25.027929 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:40:25.046291 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Nov 6 23:40:25.067589 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 23:40:25.068425 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 23:40:25.068967 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:40:25.073885 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 23:40:25.088268 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 23:40:25.111874 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 23:40:25.138775 kernel: hv_vmbus: Vmbus version:5.2 Nov 6 23:40:25.138821 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 23:40:25.157410 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 6 23:40:25.157464 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 6 23:40:25.157484 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 6 23:40:25.171094 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 6 23:40:25.194136 kernel: PTP clock support registered Nov 6 23:40:25.194193 kernel: AVX2 version of gcm_enc/dec engaged. Nov 6 23:40:25.196584 kernel: AES CTR mode by8 optimization enabled Nov 6 23:40:25.196235 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 23:40:25.208145 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 6 23:40:25.196433 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:40:25.202192 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:40:25.208129 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:40:25.208307 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:40:25.216099 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:40:25.239359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:40:25.256072 kernel: hv_vmbus: registering driver hv_netvsc Nov 6 23:40:25.265601 kernel: hv_vmbus: registering driver hv_storvsc Nov 6 23:40:25.265640 kernel: hv_utils: Registering HyperV Utility Driver Nov 6 23:40:25.265666 kernel: hv_vmbus: registering driver hv_utils Nov 6 23:40:25.266721 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:40:25.291463 kernel: hv_vmbus: registering driver hid_hyperv Nov 6 23:40:25.291494 kernel: scsi host1: storvsc_host_t Nov 6 23:40:25.291698 kernel: scsi host0: storvsc_host_t Nov 6 23:40:25.291856 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 6 23:40:25.291876 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 6 23:40:25.292030 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Nov 6 23:40:25.266810 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:40:25.291413 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Nov 6 23:40:25.308311 kernel: hv_utils: Heartbeat IC version 3.0 Nov 6 23:40:25.308332 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Nov 6 23:40:25.308366 kernel: hv_utils: Shutdown IC version 3.2 Nov 6 23:40:25.308380 kernel: hv_utils: TimeSync IC version 4.0 Nov 6 23:40:25.884693 systemd-resolved[210]: Clock change detected. Flushing caches. Nov 6 23:40:25.892702 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:40:25.930242 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 6 23:40:25.930644 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 6 23:40:25.930193 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:40:25.938239 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 6 23:40:25.939523 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:40:25.957322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#97 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 6 23:40:25.967703 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Nov 6 23:40:25.967979 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 6 23:40:25.968315 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 6 23:40:25.971322 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Nov 6 23:40:25.976328 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Nov 6 23:40:25.982252 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:40:25.989490 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 23:40:25.989523 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 6 23:40:26.005319 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#65 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 6 23:40:26.040639 kernel: hv_netvsc 7ced8d2f-7804-7ced-8d2f-78047ced8d2f eth0: VF slot 1 added Nov 6 23:40:26.049359 kernel: hv_vmbus: registering driver hv_pci Nov 6 23:40:26.055028 kernel: hv_pci 0ecf4f56-53f2-4b34-87c2-1c41c30eec4e: PCI VMBus probing: Using version 0x10004 Nov 6 23:40:26.055238 kernel: hv_pci 0ecf4f56-53f2-4b34-87c2-1c41c30eec4e: PCI host bridge to bus 53f2:00 Nov 6 23:40:26.059316 kernel: pci_bus 53f2:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Nov 6 23:40:26.066697 kernel: pci_bus 53f2:00: No busn resource found for root bus, will use [bus 00-ff] Nov 6 23:40:26.072431 kernel: pci 53f2:00:02.0: [15b3:1016] type 00 class 0x020000 Nov 6 23:40:26.077330 kernel: pci 53f2:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 6 23:40:26.081451 kernel: pci 53f2:00:02.0: enabling Extended Tags Nov 6 23:40:26.094601 kernel: pci 53f2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 53f2:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Nov 6 23:40:26.101504 kernel: pci_bus 53f2:00: busn_res: [bus 00-ff] end is updated to 00 Nov 6 23:40:26.101796 kernel: pci 53f2:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 6 23:40:26.266692 kernel: mlx5_core 53f2:00:02.0: enabling device (0000 -> 0002) Nov 6 23:40:26.271326 kernel: mlx5_core 53f2:00:02.0: firmware version: 14.30.5006 Nov 6 23:40:26.480322 kernel: hv_netvsc 7ced8d2f-7804-7ced-8d2f-78047ced8d2f eth0: VF registering: eth1 Nov 6 23:40:26.480586 kernel: mlx5_core 53f2:00:02.0 eth1: joined to eth0 Nov 6 23:40:26.487777 kernel: mlx5_core 53f2:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 6 23:40:26.499334 kernel: 
mlx5_core 53f2:00:02.0 enP21490s1: renamed from eth1 Nov 6 23:40:26.592327 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (441) Nov 6 23:40:26.623065 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 6 23:40:26.641289 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Nov 6 23:40:26.671718 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Nov 6 23:40:26.711332 kernel: BTRFS: device fsid 7e63b391-7474-48b8-9614-cf161680d90d devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (448) Nov 6 23:40:26.729000 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Nov 6 23:40:26.732768 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Nov 6 23:40:26.749479 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 23:40:26.766342 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 23:40:27.781368 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 23:40:27.784105 disk-uuid[612]: The operation has completed successfully. Nov 6 23:40:27.882289 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 23:40:27.882416 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 23:40:27.926447 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 6 23:40:27.935604 sh[698]: Success Nov 6 23:40:27.965350 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 6 23:40:28.256540 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 6 23:40:28.276316 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 6 23:40:28.281938 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 6 23:40:28.303317 kernel: BTRFS info (device dm-0): first mount of filesystem 7e63b391-7474-48b8-9614-cf161680d90d Nov 6 23:40:28.303367 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:40:28.309597 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 6 23:40:28.312700 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 23:40:28.315428 kernel: BTRFS info (device dm-0): using free space tree Nov 6 23:40:28.622358 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 6 23:40:28.628461 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 23:40:28.644457 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 23:40:28.648617 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 6 23:40:28.685319 kernel: BTRFS info (device sda6): first mount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:40:28.691453 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:40:28.691513 kernel: BTRFS info (device sda6): using free space tree Nov 6 23:40:28.715613 kernel: BTRFS info (device sda6): auto enabling async discard Nov 6 23:40:28.724341 kernel: BTRFS info (device sda6): last unmount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:40:28.727879 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Nov 6 23:40:28.735609 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 6 23:40:28.753876 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 23:40:28.765459 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 23:40:28.791011 systemd-networkd[879]: lo: Link UP Nov 6 23:40:28.791022 systemd-networkd[879]: lo: Gained carrier Nov 6 23:40:28.793242 systemd-networkd[879]: Enumeration completed Nov 6 23:40:28.793338 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 23:40:28.794076 systemd[1]: Reached target network.target - Network. Nov 6 23:40:28.795914 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:40:28.795918 systemd-networkd[879]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 23:40:28.863323 kernel: mlx5_core 53f2:00:02.0 enP21490s1: Link up Nov 6 23:40:28.897897 kernel: hv_netvsc 7ced8d2f-7804-7ced-8d2f-78047ced8d2f eth0: Data path switched to VF: enP21490s1 Nov 6 23:40:28.897475 systemd-networkd[879]: enP21490s1: Link UP Nov 6 23:40:28.897593 systemd-networkd[879]: eth0: Link UP Nov 6 23:40:28.897792 systemd-networkd[879]: eth0: Gained carrier Nov 6 23:40:28.897806 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:40:28.903511 systemd-networkd[879]: enP21490s1: Gained carrier Nov 6 23:40:28.940356 systemd-networkd[879]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 6 23:40:29.715778 ignition[860]: Ignition 2.20.0 Nov 6 23:40:29.715791 ignition[860]: Stage: fetch-offline Nov 6 23:40:29.715840 ignition[860]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:40:29.715852 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 23:40:29.715967 ignition[860]: parsed url from cmdline: "" Nov 6 23:40:29.715972 ignition[860]: no config URL provided Nov 6 23:40:29.715979 ignition[860]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 23:40:29.715989 ignition[860]: no config at "/usr/lib/ignition/user.ign" Nov 6 23:40:29.729328 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 23:40:29.715995 ignition[860]: failed to fetch config: resource requires networking Nov 6 23:40:29.716237 ignition[860]: Ignition finished successfully Nov 6 23:40:29.751438 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 6 23:40:29.768929 ignition[889]: Ignition 2.20.0 Nov 6 23:40:29.768941 ignition[889]: Stage: fetch Nov 6 23:40:29.769154 ignition[889]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:40:29.769166 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 23:40:29.769258 ignition[889]: parsed url from cmdline: "" Nov 6 23:40:29.769261 ignition[889]: no config URL provided Nov 6 23:40:29.769266 ignition[889]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 23:40:29.769272 ignition[889]: no config at "/usr/lib/ignition/user.ign" Nov 6 23:40:29.771137 ignition[889]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 6 23:40:29.857417 ignition[889]: GET result: OK Nov 6 23:40:29.857910 ignition[889]: config has been read from IMDS userdata Nov 6 23:40:29.857934 ignition[889]: parsing config with SHA512: db4a404ddec1bf9d9d76dc8e135a001865272f0513d763ef18feb1d4b89b89519fceb17e9b11b26957e6ddf2907e7b5b94b275b7e8b5fca9caa6e06ce8471511 Nov 6 23:40:29.865612 unknown[889]: fetched base config from "system" Nov 6 23:40:29.865628 unknown[889]: fetched base config from "system" Nov 6 23:40:29.866047 ignition[889]: fetch: fetch complete Nov 6 23:40:29.865636 unknown[889]: fetched user config from "azure" Nov 6 23:40:29.866052 ignition[889]: fetch: fetch passed Nov 6 23:40:29.867806 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 6 23:40:29.866293 ignition[889]: Ignition finished successfully Nov 6 23:40:29.879003 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 6 23:40:29.902267 ignition[895]: Ignition 2.20.0 Nov 6 23:40:29.902278 ignition[895]: Stage: kargs Nov 6 23:40:29.905654 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 23:40:29.902521 ignition[895]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:40:29.902534 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 23:40:29.903359 ignition[895]: kargs: kargs passed Nov 6 23:40:29.903400 ignition[895]: Ignition finished successfully Nov 6 23:40:29.922421 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 23:40:29.934035 ignition[902]: Ignition 2.20.0 Nov 6 23:40:29.934046 ignition[902]: Stage: disks Nov 6 23:40:29.934262 ignition[902]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:40:29.934275 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 23:40:29.935251 ignition[902]: disks: disks passed Nov 6 23:40:29.935289 ignition[902]: Ignition finished successfully Nov 6 23:40:29.947429 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 23:40:29.951288 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 23:40:29.960085 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 23:40:29.963499 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 23:40:29.970135 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 23:40:29.973101 systemd[1]: Reached target basic.target - Basic System. Nov 6 23:40:29.985506 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 6 23:40:30.079650 systemd-fsck[910]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Nov 6 23:40:30.085984 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 23:40:30.096537 systemd[1]: Mounting sysroot.mount - /sysroot... 
Nov 6 23:40:30.189527 kernel: EXT4-fs (sda9): mounted filesystem 2abcf372-764b-46c0-a870-42c779c5f871 r/w with ordered data mode. Quota mode: none. Nov 6 23:40:30.190249 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 23:40:30.193254 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 23:40:30.237440 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 23:40:30.260537 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (921) Nov 6 23:40:30.268022 kernel: BTRFS info (device sda6): first mount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:40:30.268107 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:40:30.267972 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 23:40:30.273830 kernel: BTRFS info (device sda6): using free space tree Nov 6 23:40:30.277564 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 6 23:40:30.287608 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 23:40:30.297841 kernel: BTRFS info (device sda6): auto enabling async discard Nov 6 23:40:30.287655 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 23:40:30.304622 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 23:40:30.310230 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 23:40:30.323466 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 6 23:40:30.930537 systemd-networkd[879]: eth0: Gained IPv6LL Nov 6 23:40:31.061115 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 23:40:31.072205 coreos-metadata[936]: Nov 06 23:40:31.072 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 6 23:40:31.079589 coreos-metadata[936]: Nov 06 23:40:31.079 INFO Fetch successful Nov 6 23:40:31.082865 coreos-metadata[936]: Nov 06 23:40:31.082 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 6 23:40:31.097973 coreos-metadata[936]: Nov 06 23:40:31.097 INFO Fetch successful Nov 6 23:40:31.100783 coreos-metadata[936]: Nov 06 23:40:31.098 INFO wrote hostname ci-4230.2.4-n-c920fca088 to /sysroot/etc/hostname Nov 6 23:40:31.105363 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory Nov 6 23:40:31.108896 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 23:40:31.118696 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 23:40:31.141428 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 23:40:32.131532 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 23:40:32.141458 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 23:40:32.148457 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 23:40:32.160168 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Nov 6 23:40:32.170278 kernel: BTRFS info (device sda6): last unmount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:40:32.199854 ignition[1040]: INFO : Ignition 2.20.0 Nov 6 23:40:32.199854 ignition[1040]: INFO : Stage: mount Nov 6 23:40:32.210680 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:40:32.210680 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 23:40:32.210680 ignition[1040]: INFO : mount: mount passed Nov 6 23:40:32.210680 ignition[1040]: INFO : Ignition finished successfully Nov 6 23:40:32.200533 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 23:40:32.205657 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 23:40:32.219398 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 23:40:32.234261 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 23:40:32.259325 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1052) Nov 6 23:40:32.264319 kernel: BTRFS info (device sda6): first mount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:40:32.264368 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:40:32.269452 kernel: BTRFS info (device sda6): using free space tree Nov 6 23:40:32.277320 kernel: BTRFS info (device sda6): auto enabling async discard Nov 6 23:40:32.278822 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 23:40:32.302029 ignition[1068]: INFO : Ignition 2.20.0 Nov 6 23:40:32.302029 ignition[1068]: INFO : Stage: files Nov 6 23:40:32.306586 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:40:32.306586 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 23:40:32.313372 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Nov 6 23:40:32.331055 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 23:40:32.331055 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 23:40:32.422025 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 23:40:32.426531 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 23:40:32.430694 unknown[1068]: wrote ssh authorized keys file for user: core Nov 6 23:40:32.433603 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 23:40:32.463925 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 23:40:32.469782 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 23:40:32.508751 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 23:40:32.546707 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 23:40:32.552814 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 23:40:32.552814 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 6 23:40:32.844535 ignition[1068]: INFO : 
files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 6 23:40:33.121595 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 23:40:33.121595 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 6 23:40:33.132637 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 23:40:33.132637 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 23:40:33.132637 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 23:40:33.132637 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 23:40:33.132637 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 23:40:33.132637 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 23:40:33.132637 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 23:40:33.132637 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 23:40:33.132637 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 23:40:33.132637 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 23:40:33.132637 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 23:40:33.132637 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 23:40:33.132637 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 6 23:40:33.372340 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 6 23:40:33.636641 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 23:40:33.636641 ignition[1068]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 6 23:40:33.666129 ignition[1068]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 23:40:33.672169 ignition[1068]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 23:40:33.672169 ignition[1068]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 6 23:40:33.672169 ignition[1068]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Nov 6 23:40:33.685537 
ignition[1068]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 23:40:33.689659 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 23:40:33.694797 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 23:40:33.699870 ignition[1068]: INFO : files: files passed Nov 6 23:40:33.699870 ignition[1068]: INFO : Ignition finished successfully Nov 6 23:40:33.703652 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 23:40:33.715530 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 23:40:33.722456 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 6 23:40:33.730131 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 23:40:33.730260 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 6 23:40:33.751773 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 23:40:33.751773 initrd-setup-root-after-ignition[1098]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 23:40:33.761422 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 23:40:33.766964 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 23:40:33.767256 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 23:40:33.782588 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 23:40:33.808672 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 23:40:33.808794 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 23:40:33.815597 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 23:40:33.825350 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 23:40:33.828358 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 23:40:33.840571 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 23:40:33.855718 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 23:40:33.865536 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 23:40:33.880011 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:40:33.880313 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:40:33.880815 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 23:40:33.881271 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 23:40:33.881484 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 23:40:33.882264 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 23:40:33.882960 systemd[1]: Stopped target basic.target - Basic System. Nov 6 23:40:33.883406 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 23:40:33.883833 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 23:40:33.884354 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Nov 6 23:40:33.885093 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 23:40:33.885618 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 23:40:33.886109 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 23:40:33.886564 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 23:40:33.887038 systemd[1]: Stopped target swap.target - Swaps. Nov 6 23:40:33.887466 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 23:40:33.887597 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 23:40:33.888433 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:40:33.888932 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 23:40:33.889348 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 23:40:33.935259 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:40:33.938917 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 23:40:33.939055 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 23:40:34.002571 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 23:40:34.002798 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 23:40:34.009498 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 23:40:34.009630 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 23:40:34.012796 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 6 23:40:34.012939 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 23:40:34.037581 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 23:40:34.040331 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 23:40:34.040523 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:40:34.052550 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 23:40:34.056153 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 23:40:34.056398 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:40:34.063910 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 23:40:34.064266 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 23:40:34.079250 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 23:40:34.079481 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 23:40:34.090772 ignition[1122]: INFO : Ignition 2.20.0 Nov 6 23:40:34.090772 ignition[1122]: INFO : Stage: umount Nov 6 23:40:34.090772 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:40:34.090772 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 23:40:34.090772 ignition[1122]: INFO : umount: umount passed Nov 6 23:40:34.090772 ignition[1122]: INFO : Ignition finished successfully Nov 6 23:40:34.091147 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 23:40:34.091255 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 23:40:34.101910 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 23:40:34.104861 systemd[1]: ignition-disks.service: Deactivated successfully. 
Nov 6 23:40:34.104910 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 23:40:34.111631 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 23:40:34.114048 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 23:40:34.131592 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 6 23:40:34.131667 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 6 23:40:34.139768 systemd[1]: Stopped target network.target - Network. Nov 6 23:40:34.139880 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 23:40:34.139939 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 23:40:34.141320 systemd[1]: Stopped target paths.target - Path Units. Nov 6 23:40:34.141776 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 23:40:34.148107 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:40:34.153973 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 23:40:34.156691 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 23:40:34.159756 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 23:40:34.159803 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 23:40:34.160231 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 23:40:34.160260 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 23:40:34.160687 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 23:40:34.160733 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 23:40:34.161148 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 23:40:34.161181 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 23:40:34.165569 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 23:40:34.165935 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 23:40:34.218236 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 23:40:34.218362 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 23:40:34.228806 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 6 23:40:34.229061 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 23:40:34.229107 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:40:34.237425 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:40:34.244190 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 23:40:34.244314 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 23:40:34.251444 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 6 23:40:34.251628 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 23:40:34.251661 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:40:34.278499 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 23:40:34.281426 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 23:40:34.281499 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 23:40:34.291080 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Nov 6 23:40:34.291140 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:40:34.297802 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 23:40:34.297854 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 23:40:34.303692 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:40:34.313459 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 23:40:34.327831 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 23:40:34.328037 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:40:34.338543 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 23:40:34.338614 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 23:40:34.347571 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 23:40:34.347629 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:40:34.356225 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 23:40:34.358756 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 23:40:34.364995 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 23:40:34.365070 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 23:40:34.373581 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 23:40:34.373657 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:40:34.385312 kernel: hv_netvsc 7ced8d2f-7804-7ced-8d2f-78047ced8d2f eth0: Data path switched from VF: enP21490s1 Nov 6 23:40:34.393439 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 23:40:34.399741 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 23:40:34.399823 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:40:34.410204 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:40:34.410279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:40:34.419930 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 23:40:34.420070 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 23:40:34.425756 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 23:40:34.425849 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 23:40:34.431715 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 23:40:34.431796 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 23:40:34.438347 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 23:40:34.442651 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 23:40:34.442741 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 23:40:34.465458 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 23:40:34.474684 systemd[1]: Switching root. Nov 6 23:40:34.532629 systemd-journald[177]: Journal stopped Nov 6 23:40:40.187362 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). 
Nov 6 23:40:40.187393 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 23:40:40.187404 kernel: SELinux: policy capability open_perms=1 Nov 6 23:40:40.187413 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 23:40:40.187421 kernel: SELinux: policy capability always_check_network=0 Nov 6 23:40:40.187429 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 23:40:40.187440 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 23:40:40.187451 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 23:40:40.187460 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 23:40:40.187468 kernel: audit: type=1403 audit(1762472436.365:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 23:40:40.187477 systemd[1]: Successfully loaded SELinux policy in 182.066ms. Nov 6 23:40:40.187487 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.443ms. Nov 6 23:40:40.187499 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 23:40:40.187510 systemd[1]: Detected virtualization microsoft. Nov 6 23:40:40.187523 systemd[1]: Detected architecture x86-64. Nov 6 23:40:40.187532 systemd[1]: Detected first boot. Nov 6 23:40:40.187542 systemd[1]: Hostname set to <ci-4230.2.4-n-c920fca088>. Nov 6 23:40:40.187554 systemd[1]: Initializing machine ID from random generator. Nov 6 23:40:40.187564 zram_generator::config[1166]: No configuration found. Nov 6 23:40:40.187578 kernel: Guest personality initialized and is inactive Nov 6 23:40:40.187589 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Nov 6 23:40:40.187598 kernel: Initialized host personality Nov 6 23:40:40.187606 kernel: NET: Registered PF_VSOCK protocol family Nov 6 23:40:40.187618 systemd[1]: Populated /etc with preset unit settings. Nov 6 23:40:40.187628 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 23:40:40.187640 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 23:40:40.187650 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 23:40:40.187663 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 23:40:40.187675 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 23:40:40.187686 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 23:40:40.187696 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 23:40:40.187707 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 23:40:40.187718 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 23:40:40.187727 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 23:40:40.187742 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 23:40:40.187752 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 23:40:40.187765 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:40:40.187775 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 6 23:40:40.187787 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 23:40:40.187797 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 23:40:40.187813 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 23:40:40.187825 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 23:40:40.187835 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 23:40:40.187847 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 23:40:40.187860 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 23:40:40.187870 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 23:40:40.187880 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 23:40:40.187893 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 23:40:40.187907 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:40:40.187918 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 23:40:40.187934 systemd[1]: Reached target slices.target - Slice Units. Nov 6 23:40:40.187945 systemd[1]: Reached target swap.target - Swaps. Nov 6 23:40:40.187958 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 23:40:40.187968 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 23:40:40.187978 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 23:40:40.187992 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:40:40.188005 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 23:40:40.188016 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:40:40.188028 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 23:40:40.188038 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 23:40:40.188050 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 23:40:40.188061 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 23:40:40.188072 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:40:40.188088 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 23:40:40.188100 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 23:40:40.188111 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 23:40:40.188122 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 23:40:40.188135 systemd[1]: Reached target machines.target - Containers. Nov 6 23:40:40.188145 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 6 23:40:40.188157 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:40:40.188170 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 23:40:40.188183 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Nov 6 23:40:40.188195 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:40:40.188206 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 23:40:40.188219 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:40:40.188229 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 23:40:40.188239 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:40:40.188253 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 23:40:40.188263 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 23:40:40.188276 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 23:40:40.188289 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 23:40:40.188307 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 23:40:40.188320 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:40:40.188330 kernel: fuse: init (API version 7.39) Nov 6 23:40:40.188344 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 23:40:40.188354 kernel: loop: module loaded Nov 6 23:40:40.188367 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 23:40:40.188380 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 23:40:40.188393 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 23:40:40.188403 kernel: ACPI: bus type drm_connector registered Nov 6 23:40:40.188417 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 23:40:40.188446 systemd-journald[1249]: Collecting audit messages is disabled. Nov 6 23:40:40.188476 systemd-journald[1249]: Journal started Nov 6 23:40:40.188507 systemd-journald[1249]: Runtime Journal (/run/log/journal/9341ba20c15541e1835a57d5311448dc) is 8M, max 158.8M, 150.8M free. Nov 6 23:40:39.472795 systemd[1]: Queued start job for default target multi-user.target. Nov 6 23:40:39.481154 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 6 23:40:39.481570 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 23:40:40.203224 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 23:40:40.212387 systemd[1]: verity-setup.service: Deactivated successfully. Nov 6 23:40:40.212464 systemd[1]: Stopped verity-setup.service. Nov 6 23:40:40.223321 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:40:40.229316 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 23:40:40.233195 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 23:40:40.236525 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 23:40:40.240222 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 23:40:40.243602 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 23:40:40.247230 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Nov 6 23:40:40.250957 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 23:40:40.254259 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 23:40:40.258269 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:40:40.262589 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 23:40:40.262769 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 23:40:40.266829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:40:40.267014 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:40:40.270839 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 23:40:40.271021 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 23:40:40.274731 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:40:40.274953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:40:40.279363 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 23:40:40.279656 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 23:40:40.283540 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:40:40.283721 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:40:40.287363 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 23:40:40.291047 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 23:40:40.295290 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 23:40:40.308213 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 23:40:40.315472 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 23:40:40.331946 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 23:40:40.335943 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 23:40:40.336001 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 23:40:40.340705 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 23:40:40.351619 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 23:40:40.358212 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 23:40:40.361769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:40:40.366751 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 23:40:40.378457 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 23:40:40.383062 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:40:40.388531 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 23:40:40.393996 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:40:40.410517 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Nov 6 23:40:40.419851 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 23:40:40.431544 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 23:40:40.441223 systemd-journald[1249]: Time spent on flushing to /var/log/journal/9341ba20c15541e1835a57d5311448dc is 30.932ms for 970 entries. Nov 6 23:40:40.441223 systemd-journald[1249]: System Journal (/var/log/journal/9341ba20c15541e1835a57d5311448dc) is 8M, max 2.6G, 2.6G free. Nov 6 23:40:40.505466 systemd-journald[1249]: Received client request to flush runtime journal. Nov 6 23:40:40.445371 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 23:40:40.462120 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:40:40.466232 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 23:40:40.470432 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 23:40:40.478367 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 23:40:40.485056 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 23:40:40.499212 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 23:40:40.511484 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 23:40:40.521435 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 6 23:40:40.530526 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 23:40:40.561242 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 23:40:40.562380 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 23:40:40.568634 udevadm[1319]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 6 23:40:40.586522 kernel: loop0: detected capacity change from 0 to 28272 Nov 6 23:40:40.588075 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:40:40.675214 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 23:40:40.686232 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 23:40:40.799839 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Nov 6 23:40:40.799864 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Nov 6 23:40:40.804766 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:40:41.034349 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 23:40:41.119336 kernel: loop1: detected capacity change from 0 to 138176 Nov 6 23:40:41.330013 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 6 23:40:41.337557 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:40:41.373507 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Nov 6 23:40:41.646922 kernel: loop2: detected capacity change from 0 to 147912 Nov 6 23:40:41.640616 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:40:41.665169 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 6 23:40:41.720467 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 23:40:41.792725 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 23:40:41.839340 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 23:40:41.889334 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#109 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 6 23:40:41.909492 kernel: hv_vmbus: registering driver hv_balloon Nov 6 23:40:41.909569 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 6 23:40:41.911833 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 23:40:41.924318 kernel: hv_vmbus: registering driver hyperv_fb Nov 6 23:40:41.929317 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 6 23:40:41.929381 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 6 23:40:41.934414 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:40:41.935824 kernel: Console: switching to colour dummy device 80x25 Nov 6 23:40:41.942324 kernel: Console: switching to colour frame buffer device 128x48 Nov 6 23:40:41.958503 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:40:41.958737 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:40:41.963749 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:40:41.977536 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:40:42.202326 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1351) Nov 6 23:40:42.208322 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Nov 6 23:40:42.279044 systemd-networkd[1344]: lo: Link UP Nov 6 23:40:42.279329 systemd-networkd[1344]: lo: Gained carrier Nov 6 23:40:42.296028 kernel: loop3: detected capacity change from 0 to 219144 Nov 6 23:40:42.299629 systemd-networkd[1344]: Enumeration completed Nov 6 23:40:42.300254 systemd-networkd[1344]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:40:42.300382 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 23:40:42.301598 systemd-networkd[1344]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 23:40:42.307487 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 23:40:42.317494 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 23:40:42.371575 kernel: mlx5_core 53f2:00:02.0 enP21490s1: Link up Nov 6 23:40:42.380331 kernel: loop4: detected capacity change from 0 to 28272 Nov 6 23:40:42.394383 kernel: hv_netvsc 7ced8d2f-7804-7ced-8d2f-78047ced8d2f eth0: Data path switched to VF: enP21490s1 Nov 6 23:40:42.394475 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 6 23:40:42.395658 systemd-networkd[1344]: enP21490s1: Link UP Nov 6 23:40:42.395766 systemd-networkd[1344]: eth0: Link UP Nov 6 23:40:42.395770 systemd-networkd[1344]: eth0: Gained carrier Nov 6 23:40:42.395783 systemd-networkd[1344]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 6 23:40:42.403543 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 23:40:42.405958 systemd-networkd[1344]: enP21490s1: Gained carrier Nov 6 23:40:42.415465 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 23:40:42.425372 kernel: loop5: detected capacity change from 0 to 138176 Nov 6 23:40:42.432516 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 6 23:40:42.443860 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 6 23:40:42.453321 kernel: loop6: detected capacity change from 0 to 147912 Nov 6 23:40:42.478332 kernel: loop7: detected capacity change from 0 to 219144 Nov 6 23:40:42.493135 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 23:40:42.504330 (sd-merge)[1450]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Nov 6 23:40:42.504914 (sd-merge)[1450]: Merged extensions into '/usr'. Nov 6 23:40:42.510860 systemd[1]: Reload requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 23:40:42.510875 systemd[1]: Reloading... Nov 6 23:40:42.590333 zram_generator::config[1493]: No configuration found. Nov 6 23:40:42.604379 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 6 23:40:42.756873 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:40:42.872780 systemd[1]: Reloading finished in 361 ms. Nov 6 23:40:42.887940 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 23:40:42.892723 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:40:42.897292 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 6 23:40:42.908769 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:40:42.920391 systemd[1]: Starting ensure-sysext.service... Nov 6 23:40:42.924480 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 6 23:40:42.937485 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 23:40:42.947892 lvm[1552]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 6 23:40:42.957436 systemd[1]: Reload requested from client PID 1551 ('systemctl') (unit ensure-sysext.service)... Nov 6 23:40:42.957452 systemd[1]: Reloading... Nov 6 23:40:42.995786 systemd-tmpfiles[1553]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 23:40:42.996194 systemd-tmpfiles[1553]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 23:40:42.998099 systemd-tmpfiles[1553]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 23:40:42.998647 systemd-tmpfiles[1553]: ACLs are not supported, ignoring. Nov 6 23:40:42.998819 systemd-tmpfiles[1553]: ACLs are not supported, ignoring. Nov 6 23:40:43.057433 zram_generator::config[1586]: No configuration found. Nov 6 23:40:43.091945 systemd-tmpfiles[1553]: Detected autofs mount point /boot during canonicalization of boot. 
Nov 6 23:40:43.091966 systemd-tmpfiles[1553]: Skipping /boot Nov 6 23:40:43.110478 systemd-tmpfiles[1553]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 23:40:43.110492 systemd-tmpfiles[1553]: Skipping /boot Nov 6 23:40:43.205047 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:40:43.318695 systemd[1]: Reloading finished in 360 ms. Nov 6 23:40:43.345151 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 6 23:40:43.349455 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:40:43.364580 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:40:43.388612 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 23:40:43.394872 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 23:40:43.413630 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 23:40:43.418456 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 23:40:43.427779 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:40:43.428047 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:40:43.433393 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:40:43.438602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:40:43.444578 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:40:43.448076 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:40:43.448357 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:40:43.448557 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:40:43.453074 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:40:43.453316 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:40:43.461706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:40:43.461929 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:40:43.467077 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:40:43.467286 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:40:43.478420 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:40:43.478829 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:40:43.480700 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Nov 6 23:40:43.490114 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:40:43.490423 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:40:43.496549 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:40:43.508570 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:40:43.523575 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:40:43.527755 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:40:43.528375 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:40:43.528525 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:40:43.536123 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:40:43.536564 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:40:43.541053 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:40:43.541372 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:40:43.545517 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:40:43.545726 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:40:43.552964 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:40:43.553199 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:40:43.557262 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:40:43.558026 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:40:43.563566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:40:43.570629 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 23:40:43.578599 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:40:43.581244 systemd-resolved[1651]: Positive Trust Anchors: Nov 6 23:40:43.581263 systemd-resolved[1651]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 23:40:43.581311 systemd-resolved[1651]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 23:40:43.585697 systemd-networkd[1344]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 6 23:40:43.589651 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:40:43.593172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:40:43.593371 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:40:43.593621 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 23:40:43.597675 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:40:43.602128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:40:43.602367 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:40:43.606898 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 23:40:43.607128 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 23:40:43.611017 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:40:43.611239 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:40:43.616038 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:40:43.616264 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:40:43.618293 systemd-resolved[1651]: Using system hostname 'ci-4230.2.4-n-c920fca088'. Nov 6 23:40:43.620135 augenrules[1689]: No rules Nov 6 23:40:43.624756 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 23:40:43.628752 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:40:43.628964 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:40:43.642089 systemd[1]: Reached target network.target - Network. Nov 6 23:40:43.645268 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:40:43.649028 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:40:43.649084 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:40:43.649494 systemd[1]: Finished ensure-sysext.service. Nov 6 23:40:43.675289 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 23:40:43.986565 systemd-networkd[1344]: eth0: Gained IPv6LL Nov 6 23:40:43.989206 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Nov 6 23:40:43.993534 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 23:40:44.205448 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 23:40:44.209890 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 23:40:47.691010 ldconfig[1302]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 23:40:47.700296 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 23:40:47.711433 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 23:40:47.720670 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 23:40:47.724258 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 23:40:47.727637 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 23:40:47.731467 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 23:40:47.735477 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 23:40:47.738860 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 23:40:47.742562 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 23:40:47.746112 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 23:40:47.746174 systemd[1]: Reached target paths.target - Path Units. Nov 6 23:40:47.749033 systemd[1]: Reached target timers.target - Timer Units. Nov 6 23:40:47.773371 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 23:40:47.778479 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 23:40:47.784144 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 23:40:47.788255 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 23:40:47.792263 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 23:40:47.801983 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 23:40:47.805587 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 23:40:47.809798 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 23:40:47.812996 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 23:40:47.815831 systemd[1]: Reached target basic.target - Basic System. Nov 6 23:40:47.818539 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:40:47.818567 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:40:47.824475 systemd[1]: Starting chronyd.service - NTP client/server... Nov 6 23:40:47.831423 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 23:40:47.841560 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 6 23:40:47.848505 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 23:40:47.860397 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Nov 6 23:40:47.865989 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 23:40:47.869676 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 23:40:47.869741 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Nov 6 23:40:47.871639 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 6 23:40:47.875420 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 6 23:40:47.878355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:40:47.886858 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 23:40:47.894493 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 23:40:47.899612 KVP[1717]: KVP starting; pid is:1717 Nov 6 23:40:47.904720 (chronyd)[1708]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Nov 6 23:40:47.907453 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 23:40:47.916503 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 23:40:47.920754 jq[1715]: false Nov 6 23:40:47.922832 chronyd[1725]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Nov 6 23:40:47.928023 KVP[1717]: KVP LIC Version: 3.1 Nov 6 23:40:47.928321 kernel: hv_utils: KVP IC version 4.0 Nov 6 23:40:47.932434 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 23:40:47.941486 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 23:40:47.945933 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 23:40:47.946588 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 23:40:47.951265 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 23:40:47.956514 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 23:40:47.973787 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 23:40:47.974291 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 23:40:47.986088 jq[1733]: true Nov 6 23:40:47.986263 chronyd[1725]: Timezone right/UTC failed leap second check, ignoring Nov 6 23:40:47.986488 chronyd[1725]: Loaded seccomp filter (level 2) Nov 6 23:40:47.990628 systemd[1]: Started chronyd.service - NTP client/server. Nov 6 23:40:47.994191 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 23:40:47.995532 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Nov 6 23:40:48.002143 extend-filesystems[1716]: Found loop4 Nov 6 23:40:48.007524 extend-filesystems[1716]: Found loop5 Nov 6 23:40:48.007524 extend-filesystems[1716]: Found loop6 Nov 6 23:40:48.007524 extend-filesystems[1716]: Found loop7 Nov 6 23:40:48.007524 extend-filesystems[1716]: Found sda Nov 6 23:40:48.007524 extend-filesystems[1716]: Found sda1 Nov 6 23:40:48.007524 extend-filesystems[1716]: Found sda2 Nov 6 23:40:48.007524 extend-filesystems[1716]: Found sda3 Nov 6 23:40:48.007524 extend-filesystems[1716]: Found usr Nov 6 23:40:48.007524 extend-filesystems[1716]: Found sda4 Nov 6 23:40:48.007524 extend-filesystems[1716]: Found sda6 Nov 6 23:40:48.007524 extend-filesystems[1716]: Found sda7 Nov 6 23:40:48.007524 extend-filesystems[1716]: Found sda9 Nov 6 23:40:48.007524 extend-filesystems[1716]: Checking size of /dev/sda9 Nov 6 23:40:48.039910 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 23:40:48.040083 (ntainerd)[1753]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 23:40:48.040578 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 23:40:48.095138 jq[1744]: true Nov 6 23:40:48.120020 extend-filesystems[1716]: Old size kept for /dev/sda9 Nov 6 23:40:48.120020 extend-filesystems[1716]: Found sr0 Nov 6 23:40:48.116519 dbus-daemon[1711]: [system] SELinux support is enabled Nov 6 23:40:48.112868 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 23:40:48.153930 update_engine[1731]: I20251106 23:40:48.141527 1731 main.cc:92] Flatcar Update Engine starting Nov 6 23:40:48.114873 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 6 23:40:48.123262 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 23:40:48.141864 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 23:40:48.141905 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 23:40:48.146365 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 23:40:48.146386 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 23:40:48.176089 tar[1741]: linux-amd64/LICENSE Nov 6 23:40:48.176089 tar[1741]: linux-amd64/helm Nov 6 23:40:48.176899 update_engine[1731]: I20251106 23:40:48.175801 1731 update_check_scheduler.cc:74] Next update check in 7m12s Nov 6 23:40:48.170382 systemd[1]: Started update-engine.service - Update Engine. Nov 6 23:40:48.185463 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 23:40:48.199058 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 23:40:48.230357 systemd-logind[1729]: New seat seat0. Nov 6 23:40:48.234495 systemd-logind[1729]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 23:40:48.234706 systemd[1]: Started systemd-logind.service - User Login Management. 
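The "Old size kept for /dev/sda9" line above is extend-filesystems concluding that the root filesystem already fills its partition, so no grow is needed. A minimal sketch of that kind of check, in Python; the paths, the 512-byte sector unit, and the slack threshold are illustrative assumptions, not the Flatcar unit's actual code (which shells out to growpart/resize tools):

import os

PART_SYSFS = "/sys/class/block/sda9/size"   # partition size in 512-byte sectors
MOUNTPOINT = "/"                            # where /dev/sda9 is mounted in this sketch

with open(PART_SYSFS) as f:
    part_bytes = int(f.read().strip()) * 512

st = os.statvfs(MOUNTPOINT)
fs_bytes = st.f_frsize * st.f_blocks

# Allow some slack for filesystem metadata; if the partition is not
# meaningfully larger than the filesystem, keep the old size.
if part_bytes - fs_bytes < 16 * 1024 * 1024:          # 16 MiB slack, arbitrary here
    print("Old size kept for /dev/sda9")
else:
    print(f"/dev/sda9 could grow by {(part_bytes - fs_bytes) // (1024 * 1024)} MiB")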
Nov 6 23:40:48.311469 coreos-metadata[1710]: Nov 06 23:40:48.307 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 6 23:40:48.314671 coreos-metadata[1710]: Nov 06 23:40:48.314 INFO Fetch successful Nov 6 23:40:48.314671 coreos-metadata[1710]: Nov 06 23:40:48.314 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 6 23:40:48.318272 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1801) Nov 6 23:40:48.319432 coreos-metadata[1710]: Nov 06 23:40:48.319 INFO Fetch successful Nov 6 23:40:48.323343 coreos-metadata[1710]: Nov 06 23:40:48.322 INFO Fetching http://168.63.129.16/machine/8810e6d6-df46-4a42-8b64-6bccb737a34e/4e9343d2%2D8496%2D46c0%2Da725%2D47da65ba8468.%5Fci%2D4230.2.4%2Dn%2Dc920fca088?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 6 23:40:48.326139 coreos-metadata[1710]: Nov 06 23:40:48.325 INFO Fetch successful Nov 6 23:40:48.326139 coreos-metadata[1710]: Nov 06 23:40:48.326 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 6 23:40:48.339966 coreos-metadata[1710]: Nov 06 23:40:48.339 INFO Fetch successful Nov 6 23:40:48.414338 bash[1785]: Updated "/home/core/.ssh/authorized_keys" Nov 6 23:40:48.422939 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 23:40:48.464559 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 6 23:40:48.471433 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 6 23:40:48.490193 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 23:40:48.692146 locksmithd[1784]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 23:40:49.219358 sshd_keygen[1734]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 23:40:49.261265 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 23:40:49.275400 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 23:40:49.284922 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 6 23:40:49.303121 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 23:40:49.304345 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 23:40:49.312509 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 6 23:40:49.326552 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 23:40:49.360833 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 23:40:49.370500 tar[1741]: linux-amd64/README.md Nov 6 23:40:49.374710 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 23:40:49.380761 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 23:40:49.385633 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 23:40:49.405504 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 23:40:49.710924 containerd[1753]: time="2025-11-06T23:40:49.710807000Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Nov 6 23:40:49.745970 containerd[1753]: time="2025-11-06T23:40:49.745049200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
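coreos-metadata above talks to two Azure endpoints: the wireserver at 168.63.129.16 and the Instance Metadata Service (IMDS) at 169.254.169.254. A minimal sketch of the IMDS request it logs, in Python; the URL and api-version are taken from the log, the required "Metadata: true" header is standard IMDS behavior, and the printed value is only an example:

import urllib.request

# Fetch the VM size from Azure IMDS, mirroring the request coreos-metadata
# logs above. IMDS only answers requests carrying the Metadata: true header
# and the request must not go through a proxy.
url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")
req = urllib.request.Request(url, headers={"Metadata": "true"})
opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))  # bypass proxies
with opener.open(req, timeout=5) as resp:
    print(resp.read().decode())   # e.g. "Standard_D2s_v3" (illustrative value)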
type=io.containerd.snapshotter.v1 Nov 6 23:40:49.748449 containerd[1753]: time="2025-11-06T23:40:49.747370400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:40:49.748449 containerd[1753]: time="2025-11-06T23:40:49.747410900Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 6 23:40:49.748449 containerd[1753]: time="2025-11-06T23:40:49.747432000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 6 23:40:49.748449 containerd[1753]: time="2025-11-06T23:40:49.747592300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 6 23:40:49.748449 containerd[1753]: time="2025-11-06T23:40:49.747616500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 6 23:40:49.748449 containerd[1753]: time="2025-11-06T23:40:49.747696000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:40:49.748449 containerd[1753]: time="2025-11-06T23:40:49.747712900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:40:49.748449 containerd[1753]: time="2025-11-06T23:40:49.747956800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:40:49.748449 containerd[1753]: time="2025-11-06T23:40:49.747976800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 6 23:40:49.748449 containerd[1753]: time="2025-11-06T23:40:49.747994800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:40:49.748449 containerd[1753]: time="2025-11-06T23:40:49.748009000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 6 23:40:49.748869 containerd[1753]: time="2025-11-06T23:40:49.748097600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:40:49.748869 containerd[1753]: time="2025-11-06T23:40:49.748336000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:40:49.748869 containerd[1753]: time="2025-11-06T23:40:49.748519400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:40:49.748869 containerd[1753]: time="2025-11-06T23:40:49.748538700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 6 23:40:49.748869 containerd[1753]: time="2025-11-06T23:40:49.748647700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Nov 6 23:40:49.748869 containerd[1753]: time="2025-11-06T23:40:49.748707700Z" level=info msg="metadata content store policy set" policy=shared Nov 6 23:40:49.761057 containerd[1753]: time="2025-11-06T23:40:49.761024200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 6 23:40:49.761139 containerd[1753]: time="2025-11-06T23:40:49.761081400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 6 23:40:49.761139 containerd[1753]: time="2025-11-06T23:40:49.761104100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 6 23:40:49.761139 containerd[1753]: time="2025-11-06T23:40:49.761123700Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 6 23:40:49.761256 containerd[1753]: time="2025-11-06T23:40:49.761140500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 6 23:40:49.761314 containerd[1753]: time="2025-11-06T23:40:49.761282500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 6 23:40:49.761845 containerd[1753]: time="2025-11-06T23:40:49.761777500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 6 23:40:49.762000 containerd[1753]: time="2025-11-06T23:40:49.761962800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 6 23:40:49.762058 containerd[1753]: time="2025-11-06T23:40:49.761998200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 6 23:40:49.762058 containerd[1753]: time="2025-11-06T23:40:49.762018700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 6 23:40:49.762058 containerd[1753]: time="2025-11-06T23:40:49.762038500Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 6 23:40:49.762184 containerd[1753]: time="2025-11-06T23:40:49.762069600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 6 23:40:49.762184 containerd[1753]: time="2025-11-06T23:40:49.762088700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 6 23:40:49.762184 containerd[1753]: time="2025-11-06T23:40:49.762113000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 6 23:40:49.762184 containerd[1753]: time="2025-11-06T23:40:49.762146700Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 6 23:40:49.762184 containerd[1753]: time="2025-11-06T23:40:49.762166100Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 6 23:40:49.762184 containerd[1753]: time="2025-11-06T23:40:49.762182800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 6 23:40:49.762389 containerd[1753]: time="2025-11-06T23:40:49.762199000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Nov 6 23:40:49.762389 containerd[1753]: time="2025-11-06T23:40:49.762239200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.762389 containerd[1753]: time="2025-11-06T23:40:49.762259100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.762389 containerd[1753]: time="2025-11-06T23:40:49.762276900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.762389 containerd[1753]: time="2025-11-06T23:40:49.762317500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.762389 containerd[1753]: time="2025-11-06T23:40:49.762335600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.762389 containerd[1753]: time="2025-11-06T23:40:49.762362400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.762639 containerd[1753]: time="2025-11-06T23:40:49.762395600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.762639 containerd[1753]: time="2025-11-06T23:40:49.762414600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.762639 containerd[1753]: time="2025-11-06T23:40:49.762434000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.762639 containerd[1753]: time="2025-11-06T23:40:49.762454500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.762639 containerd[1753]: time="2025-11-06T23:40:49.762487200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.762639 containerd[1753]: time="2025-11-06T23:40:49.762516200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.762639 containerd[1753]: time="2025-11-06T23:40:49.762534500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.762639 containerd[1753]: time="2025-11-06T23:40:49.762568100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 6 23:40:49.762639 containerd[1753]: time="2025-11-06T23:40:49.762597800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.762639 containerd[1753]: time="2025-11-06T23:40:49.762636600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.765427 containerd[1753]: time="2025-11-06T23:40:49.762654700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 6 23:40:49.765427 containerd[1753]: time="2025-11-06T23:40:49.762732300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 6 23:40:49.765427 containerd[1753]: time="2025-11-06T23:40:49.762755600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 6 23:40:49.765427 containerd[1753]: time="2025-11-06T23:40:49.762771500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 6 23:40:49.765427 containerd[1753]: time="2025-11-06T23:40:49.762852900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 6 23:40:49.765427 containerd[1753]: time="2025-11-06T23:40:49.762926900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.765427 containerd[1753]: time="2025-11-06T23:40:49.762946300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 6 23:40:49.765427 containerd[1753]: time="2025-11-06T23:40:49.762959500Z" level=info msg="NRI interface is disabled by configuration." Nov 6 23:40:49.765427 containerd[1753]: time="2025-11-06T23:40:49.762972900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 6 23:40:49.765737 containerd[1753]: time="2025-11-06T23:40:49.763443900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 6 23:40:49.765737 containerd[1753]: time="2025-11-06T23:40:49.763523000Z" level=info msg="Connect containerd service" Nov 6 23:40:49.765737 containerd[1753]: time="2025-11-06T23:40:49.763582000Z" level=info msg="using legacy CRI server" Nov 6 23:40:49.765737 containerd[1753]: time="2025-11-06T23:40:49.763595600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 23:40:49.765737 containerd[1753]: time="2025-11-06T23:40:49.763764700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 6 23:40:49.765737 containerd[1753]: time="2025-11-06T23:40:49.764876600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 23:40:49.765737 containerd[1753]: time="2025-11-06T23:40:49.764945000Z" level=info msg="Start subscribing containerd event" Nov 6 23:40:49.765737 containerd[1753]: time="2025-11-06T23:40:49.764993700Z" level=info msg="Start recovering state" Nov 6 23:40:49.765737 containerd[1753]: time="2025-11-06T23:40:49.765065900Z" level=info msg="Start event monitor" Nov 6 23:40:49.765737 containerd[1753]: time="2025-11-06T23:40:49.765094900Z" level=info msg="Start snapshots syncer" Nov 6 23:40:49.765737 containerd[1753]: time="2025-11-06T23:40:49.765107100Z" level=info msg="Start cni network conf syncer for default" Nov 6 23:40:49.765737 containerd[1753]: time="2025-11-06T23:40:49.765118700Z" level=info msg="Start streaming server" Nov 6 23:40:49.765737 containerd[1753]: time="2025-11-06T23:40:49.765654300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 23:40:49.765737 containerd[1753]: time="2025-11-06T23:40:49.765710800Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 23:40:49.765883 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 23:40:49.770902 containerd[1753]: time="2025-11-06T23:40:49.769353700Z" level=info msg="containerd successfully booted in 0.059896s" Nov 6 23:40:49.897959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:40:49.902290 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 23:40:49.906123 systemd[1]: Startup finished in 1.092s (kernel) + 11.892s (initrd) + 13.720s (userspace) = 26.705s. Nov 6 23:40:49.940715 (kubelet)[1898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:40:50.383595 login[1885]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 6 23:40:50.388059 login[1886]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 6 23:40:50.400233 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 23:40:50.406138 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 23:40:50.419405 systemd-logind[1729]: New session 2 of user core. Nov 6 23:40:50.428882 systemd-logind[1729]: New session 1 of user core. 
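containerd reports serving on /run/containerd/containerd.sock (and its ttrpc twin) once booted. A quick, purely illustrative way to confirm that socket is accepting connections, sketched in Python; real clients speak gRPC/CRI over it, so a successful connect() only proves the daemon is listening:

import socket

# Probe the containerd socket path reported in the log above.
ADDR = "/run/containerd/containerd.sock"
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.settimeout(2)
    s.connect(ADDR)          # usually requires root, like any containerd client
    print(f"{ADDR} is accepting connections")
finally:
    s.close()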
Nov 6 23:40:50.436738 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 23:40:50.443526 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 23:40:50.472046 (systemd)[1909]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 23:40:50.474828 systemd-logind[1729]: New session c1 of user core. Nov 6 23:40:50.625917 kubelet[1898]: E1106 23:40:50.625859 1898 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:40:50.629083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:40:50.629290 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:40:50.629796 systemd[1]: kubelet.service: Consumed 936ms CPU time, 260.4M memory peak. Nov 6 23:40:50.712423 systemd[1909]: Queued start job for default target default.target. Nov 6 23:40:50.721643 systemd[1909]: Created slice app.slice - User Application Slice. Nov 6 23:40:50.721682 systemd[1909]: Reached target paths.target - Paths. Nov 6 23:40:50.721735 systemd[1909]: Reached target timers.target - Timers. Nov 6 23:40:50.725456 systemd[1909]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 23:40:50.734997 systemd[1909]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 23:40:50.735088 systemd[1909]: Reached target sockets.target - Sockets. Nov 6 23:40:50.735194 systemd[1909]: Reached target basic.target - Basic System. Nov 6 23:40:50.735247 systemd[1909]: Reached target default.target - Main User Target. Nov 6 23:40:50.735282 systemd[1909]: Startup finished in 252ms. Nov 6 23:40:50.735586 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 23:40:50.741482 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 23:40:50.742509 systemd[1]: Started session-2.scope - Session 2 of User core. 
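The kubelet failure above (and its repeats further down) is expected on a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join. For illustration only, a Python sketch that drops a minimal KubeletConfiguration into place; the field values are generic defaults and are not taken from this node:

from pathlib import Path

# Illustrative only: kubeadm normally generates this file during init/join.
# The values below are generic KubeletConfiguration defaults, not this node's.
minimal_config = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
"""
path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(minimal_config)
print(f"wrote {path} ({len(minimal_config)} bytes)")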
Nov 6 23:40:51.524180 waagent[1881]: 2025-11-06T23:40:51.524075Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Nov 6 23:40:51.563853 waagent[1881]: 2025-11-06T23:40:51.524655Z INFO Daemon Daemon OS: flatcar 4230.2.4 Nov 6 23:40:51.563853 waagent[1881]: 2025-11-06T23:40:51.524780Z INFO Daemon Daemon Python: 3.11.11 Nov 6 23:40:51.563853 waagent[1881]: 2025-11-06T23:40:51.526494Z INFO Daemon Daemon Run daemon Nov 6 23:40:51.563853 waagent[1881]: 2025-11-06T23:40:51.527012Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.4' Nov 6 23:40:51.563853 waagent[1881]: 2025-11-06T23:40:51.527864Z INFO Daemon Daemon Using waagent for provisioning Nov 6 23:40:51.563853 waagent[1881]: 2025-11-06T23:40:51.528936Z INFO Daemon Daemon Activate resource disk Nov 6 23:40:51.563853 waagent[1881]: 2025-11-06T23:40:51.529328Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 6 23:40:51.563853 waagent[1881]: 2025-11-06T23:40:51.534452Z INFO Daemon Daemon Found device: None Nov 6 23:40:51.563853 waagent[1881]: 2025-11-06T23:40:51.535475Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 6 23:40:51.563853 waagent[1881]: 2025-11-06T23:40:51.535959Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 6 23:40:51.563853 waagent[1881]: 2025-11-06T23:40:51.537254Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 6 23:40:51.563853 waagent[1881]: 2025-11-06T23:40:51.537452Z INFO Daemon Daemon Running default provisioning handler Nov 6 23:40:51.570868 waagent[1881]: 2025-11-06T23:40:51.570747Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Nov 6 23:40:51.578342 waagent[1881]: 2025-11-06T23:40:51.578255Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 6 23:40:51.589261 waagent[1881]: 2025-11-06T23:40:51.578521Z INFO Daemon Daemon cloud-init is enabled: False Nov 6 23:40:51.589261 waagent[1881]: 2025-11-06T23:40:51.579108Z INFO Daemon Daemon Copying ovf-env.xml Nov 6 23:40:51.667431 waagent[1881]: 2025-11-06T23:40:51.664659Z INFO Daemon Daemon Successfully mounted dvd Nov 6 23:40:51.705478 waagent[1881]: 2025-11-06T23:40:51.705380Z INFO Daemon Daemon Detect protocol endpoint Nov 6 23:40:51.705512 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 6 23:40:51.709140 waagent[1881]: 2025-11-06T23:40:51.708962Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 6 23:40:51.723275 waagent[1881]: 2025-11-06T23:40:51.709247Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Nov 6 23:40:51.723275 waagent[1881]: 2025-11-06T23:40:51.710192Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 6 23:40:51.723275 waagent[1881]: 2025-11-06T23:40:51.711336Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 6 23:40:51.723275 waagent[1881]: 2025-11-06T23:40:51.711703Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 6 23:40:51.751514 waagent[1881]: 2025-11-06T23:40:51.751446Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 6 23:40:51.760505 waagent[1881]: 2025-11-06T23:40:51.752022Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 6 23:40:51.760505 waagent[1881]: 2025-11-06T23:40:51.753015Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 6 23:40:51.915764 waagent[1881]: 2025-11-06T23:40:51.915654Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 6 23:40:51.919665 waagent[1881]: 2025-11-06T23:40:51.919592Z INFO Daemon Daemon Forcing an update of the goal state. Nov 6 23:40:51.926254 waagent[1881]: 2025-11-06T23:40:51.926193Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 6 23:40:51.944917 waagent[1881]: 2025-11-06T23:40:51.944850Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Nov 6 23:40:51.962643 waagent[1881]: 2025-11-06T23:40:51.945666Z INFO Daemon Nov 6 23:40:51.962643 waagent[1881]: 2025-11-06T23:40:51.945795Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 54c0fd01-817f-4768-a506-f702b0725d0f eTag: 4246730570347869454 source: Fabric] Nov 6 23:40:51.962643 waagent[1881]: 2025-11-06T23:40:51.947100Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Nov 6 23:40:51.962643 waagent[1881]: 2025-11-06T23:40:51.948348Z INFO Daemon Nov 6 23:40:51.962643 waagent[1881]: 2025-11-06T23:40:51.948865Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 6 23:40:51.965509 waagent[1881]: 2025-11-06T23:40:51.965456Z INFO Daemon Daemon Downloading artifacts profile blob Nov 6 23:40:52.028809 waagent[1881]: 2025-11-06T23:40:52.028716Z INFO Daemon Downloaded certificate {'thumbprint': '7769FC494F6A36752FC6C468CE3753292F9E4AD9', 'hasPrivateKey': True} Nov 6 23:40:52.034653 waagent[1881]: 2025-11-06T23:40:52.034580Z INFO Daemon Fetch goal state completed Nov 6 23:40:52.042592 waagent[1881]: 2025-11-06T23:40:52.042539Z INFO Daemon Daemon Starting provisioning Nov 6 23:40:52.050782 waagent[1881]: 2025-11-06T23:40:52.042821Z INFO Daemon Daemon Handle ovf-env.xml. Nov 6 23:40:52.050782 waagent[1881]: 2025-11-06T23:40:52.044046Z INFO Daemon Daemon Set hostname [ci-4230.2.4-n-c920fca088] Nov 6 23:40:52.062433 waagent[1881]: 2025-11-06T23:40:52.062337Z INFO Daemon Daemon Publish hostname [ci-4230.2.4-n-c920fca088] Nov 6 23:40:52.071737 waagent[1881]: 2025-11-06T23:40:52.062863Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 6 23:40:52.071737 waagent[1881]: 2025-11-06T23:40:52.063988Z INFO Daemon Daemon Primary interface is [eth0] Nov 6 23:40:52.073506 systemd-networkd[1344]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:40:52.073517 systemd-networkd[1344]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 6 23:40:52.073565 systemd-networkd[1344]: eth0: DHCP lease lost Nov 6 23:40:52.074765 waagent[1881]: 2025-11-06T23:40:52.074667Z INFO Daemon Daemon Create user account if not exists Nov 6 23:40:52.094080 waagent[1881]: 2025-11-06T23:40:52.077950Z INFO Daemon Daemon User core already exists, skip useradd Nov 6 23:40:52.094080 waagent[1881]: 2025-11-06T23:40:52.078794Z INFO Daemon Daemon Configure sudoer Nov 6 23:40:52.094080 waagent[1881]: 2025-11-06T23:40:52.079604Z INFO Daemon Daemon Configure sshd Nov 6 23:40:52.094080 waagent[1881]: 2025-11-06T23:40:52.079982Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 6 23:40:52.094080 waagent[1881]: 2025-11-06T23:40:52.080768Z INFO Daemon Daemon Deploy ssh public key. Nov 6 23:40:52.131375 systemd-networkd[1344]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 6 23:41:00.790841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 23:41:00.798532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:41:00.905371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:41:00.909779 (kubelet)[1972]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:41:01.558805 kubelet[1972]: E1106 23:41:01.558727 1972 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:41:01.562417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:41:01.562620 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:41:01.563160 systemd[1]: kubelet.service: Consumed 145ms CPU time, 112.3M memory peak. Nov 6 23:41:11.777841 chronyd[1725]: Selected source PHC0 Nov 6 23:41:11.790842 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 23:41:11.796519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:41:11.899945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:41:11.904090 (kubelet)[1987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:41:11.943866 kubelet[1987]: E1106 23:41:11.943807 1987 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:41:11.946507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:41:11.946700 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:41:11.947026 systemd[1]: kubelet.service: Consumed 139ms CPU time, 109.8M memory peak. Nov 6 23:41:22.040952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 6 23:41:22.046520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
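chronyd's "Selected source PHC0" above refers to the PTP hardware clock the Hyper-V host exposes to the guest, used via a refclock PHC style directive in chrony's configuration. A small sketch that lists which driver backs each PTP clock; the sysfs attribute is standard, and "hyperv" is the name Hyper-V guests typically report (an assumption about this particular VM):

from pathlib import Path

# Each PTP clock exposes its driver name in sysfs; on Hyper-V guests the
# integration services register one, which chrony can use as refclock PHC0.
for clock in sorted(Path("/sys/class/ptp").glob("ptp*")):
    name = (clock / "clock_name").read_text().strip()
    print(f"{clock.name}: {name}")   # expected to include something like: ptp0: hyperv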
Nov 6 23:41:22.148881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:41:22.153437 (kubelet)[2002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:41:22.169859 waagent[1881]: 2025-11-06T23:41:22.169790Z INFO Daemon Daemon Provisioning complete Nov 6 23:41:22.184182 waagent[1881]: 2025-11-06T23:41:22.184114Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 6 23:41:22.187909 waagent[1881]: 2025-11-06T23:41:22.187836Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Nov 6 23:41:22.193292 waagent[1881]: 2025-11-06T23:41:22.193229Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Nov 6 23:41:22.317986 waagent[2008]: 2025-11-06T23:41:22.317832Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Nov 6 23:41:22.318389 waagent[2008]: 2025-11-06T23:41:22.317987Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.4 Nov 6 23:41:22.318389 waagent[2008]: 2025-11-06T23:41:22.318073Z INFO ExtHandler ExtHandler Python: 3.11.11 Nov 6 23:41:22.805311 kubelet[2002]: E1106 23:41:22.805244 2002 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:41:22.807654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:41:22.807872 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:41:22.808255 systemd[1]: kubelet.service: Consumed 146ms CPU time, 109.8M memory peak. Nov 6 23:41:22.898118 waagent[2008]: 2025-11-06T23:41:22.898019Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Nov 6 23:41:22.898392 waagent[2008]: 2025-11-06T23:41:22.898332Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 6 23:41:22.898497 waagent[2008]: 2025-11-06T23:41:22.898455Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 6 23:41:22.905804 waagent[2008]: 2025-11-06T23:41:22.905742Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 6 23:41:22.914277 waagent[2008]: 2025-11-06T23:41:22.914221Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 6 23:41:22.914737 waagent[2008]: 2025-11-06T23:41:22.914681Z INFO ExtHandler Nov 6 23:41:22.914831 waagent[2008]: 2025-11-06T23:41:22.914773Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 24638ae4-2c0a-4634-96d1-36ae14202de8 eTag: 4246730570347869454 source: Fabric] Nov 6 23:41:22.915138 waagent[2008]: 2025-11-06T23:41:22.915086Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Nov 6 23:41:22.915731 waagent[2008]: 2025-11-06T23:41:22.915673Z INFO ExtHandler Nov 6 23:41:22.915807 waagent[2008]: 2025-11-06T23:41:22.915759Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 6 23:41:22.919933 waagent[2008]: 2025-11-06T23:41:22.919883Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 6 23:41:22.979879 waagent[2008]: 2025-11-06T23:41:22.979804Z INFO ExtHandler Downloaded certificate {'thumbprint': '7769FC494F6A36752FC6C468CE3753292F9E4AD9', 'hasPrivateKey': True} Nov 6 23:41:22.980372 waagent[2008]: 2025-11-06T23:41:22.980317Z INFO ExtHandler Fetch goal state completed Nov 6 23:41:22.993341 waagent[2008]: 2025-11-06T23:41:22.993268Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2008 Nov 6 23:41:22.993496 waagent[2008]: 2025-11-06T23:41:22.993447Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 6 23:41:22.995032 waagent[2008]: 2025-11-06T23:41:22.994979Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.4', '', 'Flatcar Container Linux by Kinvolk'] Nov 6 23:41:22.995415 waagent[2008]: 2025-11-06T23:41:22.995365Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 6 23:41:23.055175 waagent[2008]: 2025-11-06T23:41:23.055121Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 6 23:41:23.055501 waagent[2008]: 2025-11-06T23:41:23.055405Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 6 23:41:23.062184 waagent[2008]: 2025-11-06T23:41:23.062019Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 6 23:41:23.069380 systemd[1]: Reload requested from client PID 2022 ('systemctl') (unit waagent.service)... Nov 6 23:41:23.069397 systemd[1]: Reloading... Nov 6 23:41:23.170357 zram_generator::config[2061]: No configuration found. Nov 6 23:41:23.304748 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:41:23.416944 systemd[1]: Reloading finished in 347 ms. Nov 6 23:41:23.433740 waagent[2008]: 2025-11-06T23:41:23.433286Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Nov 6 23:41:23.442803 systemd[1]: Reload requested from client PID 2120 ('systemctl') (unit waagent.service)... Nov 6 23:41:23.442929 systemd[1]: Reloading... Nov 6 23:41:23.537329 zram_generator::config[2155]: No configuration found. Nov 6 23:41:23.670162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:41:23.780386 systemd[1]: Reloading finished in 336 ms. Nov 6 23:41:23.795906 waagent[2008]: 2025-11-06T23:41:23.795749Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 6 23:41:23.796030 waagent[2008]: 2025-11-06T23:41:23.795953Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 6 23:41:24.228935 waagent[2008]: 2025-11-06T23:41:24.228842Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Nov 6 23:41:24.229527 waagent[2008]: 2025-11-06T23:41:24.229459Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Nov 6 23:41:24.230278 waagent[2008]: 2025-11-06T23:41:24.230218Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 6 23:41:24.230765 waagent[2008]: 2025-11-06T23:41:24.230687Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 6 23:41:24.230866 waagent[2008]: 2025-11-06T23:41:24.230823Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 6 23:41:24.231010 waagent[2008]: 2025-11-06T23:41:24.230916Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 6 23:41:24.231286 waagent[2008]: 2025-11-06T23:41:24.231221Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 6 23:41:24.231527 waagent[2008]: 2025-11-06T23:41:24.231446Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 6 23:41:24.231643 waagent[2008]: 2025-11-06T23:41:24.231584Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 6 23:41:24.231908 waagent[2008]: 2025-11-06T23:41:24.231839Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 6 23:41:24.232094 waagent[2008]: 2025-11-06T23:41:24.232043Z INFO EnvHandler ExtHandler Configure routes Nov 6 23:41:24.232289 waagent[2008]: 2025-11-06T23:41:24.232230Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 6 23:41:24.232691 waagent[2008]: 2025-11-06T23:41:24.232546Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 6 23:41:24.232691 waagent[2008]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 6 23:41:24.232691 waagent[2008]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Nov 6 23:41:24.232691 waagent[2008]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 6 23:41:24.232691 waagent[2008]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 6 23:41:24.232691 waagent[2008]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 6 23:41:24.232691 waagent[2008]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 6 23:41:24.233012 waagent[2008]: 2025-11-06T23:41:24.232961Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 6 23:41:24.233382 waagent[2008]: 2025-11-06T23:41:24.233281Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Nov 6 23:41:24.233546 waagent[2008]: 2025-11-06T23:41:24.233507Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 6 23:41:24.233604 waagent[2008]: 2025-11-06T23:41:24.233557Z INFO EnvHandler ExtHandler Gateway:None Nov 6 23:41:24.233722 waagent[2008]: 2025-11-06T23:41:24.233646Z INFO EnvHandler ExtHandler Routes:None Nov 6 23:41:24.240799 waagent[2008]: 2025-11-06T23:41:24.240738Z INFO ExtHandler ExtHandler Nov 6 23:41:24.241157 waagent[2008]: 2025-11-06T23:41:24.241112Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3435ca35-ecaf-43d0-82c5-d541320ff81b correlation 25b12262-501a-4e4e-bf7c-9de27752a0d3 created: 2025-11-06T23:39:54.920662Z] Nov 6 23:41:24.241530 waagent[2008]: 2025-11-06T23:41:24.241482Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
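The MonitorHandler routing table above is a raw dump of /proc/net/route, where destinations, gateways and masks are little-endian hex. A short sketch of decoding that table and checking for a route to the wireserver, in the spirit of the agent's "Test for route to 168.63.129.16" earlier; the decoding is standard, the code is not waagent's own:

import socket
import struct

WIRESERVER = "168.63.129.16"

def hex_to_ip(h):
    # /proc/net/route stores IPv4 addresses as little-endian hex words
    return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

target = struct.unpack("!I", socket.inet_aton(WIRESERVER))[0]
with open("/proc/net/route") as f:
    next(f)  # skip the header line
    for line in f:
        iface, dest, gw, flags, _, _, metric, mask, *_ = line.split()
        dest_ip, gw_ip, mask_ip = hex_to_ip(dest), hex_to_ip(gw), hex_to_ip(mask)
        d = struct.unpack("!I", socket.inet_aton(dest_ip))[0]
        m = struct.unpack("!I", socket.inet_aton(mask_ip))[0]
        if target & m == d & m:
            print(f"route to {WIRESERVER} via {gw_ip} dev {iface} metric {metric}")

Run against the table shown above, this would match both the default route and the /32 host route to 168.63.129.16.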
Nov 6 23:41:24.242022 waagent[2008]: 2025-11-06T23:41:24.241977Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Nov 6 23:41:24.275254 waagent[2008]: 2025-11-06T23:41:24.275170Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 8BF35FBD-6F40-4597-BF67-A662E116F41B;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Nov 6 23:41:24.343713 waagent[2008]: 2025-11-06T23:41:24.343635Z INFO MonitorHandler ExtHandler Network interfaces: Nov 6 23:41:24.343713 waagent[2008]: Executing ['ip', '-a', '-o', 'link']: Nov 6 23:41:24.343713 waagent[2008]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 6 23:41:24.343713 waagent[2008]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2f:78:04 brd ff:ff:ff:ff:ff:ff Nov 6 23:41:24.343713 waagent[2008]: 3: enP21490s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2f:78:04 brd ff:ff:ff:ff:ff:ff\ altname enP21490p0s2 Nov 6 23:41:24.343713 waagent[2008]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 6 23:41:24.343713 waagent[2008]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 6 23:41:24.343713 waagent[2008]: 2: eth0 inet 10.200.8.12/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 6 23:41:24.343713 waagent[2008]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 6 23:41:24.343713 waagent[2008]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 6 23:41:24.343713 waagent[2008]: 2: eth0 inet6 fe80::7eed:8dff:fe2f:7804/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 6 23:41:24.376458 waagent[2008]: 2025-11-06T23:41:24.376252Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Nov 6 23:41:24.376458 waagent[2008]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 6 23:41:24.376458 waagent[2008]: pkts bytes target prot opt in out source destination Nov 6 23:41:24.376458 waagent[2008]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 6 23:41:24.376458 waagent[2008]: pkts bytes target prot opt in out source destination Nov 6 23:41:24.376458 waagent[2008]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 6 23:41:24.376458 waagent[2008]: pkts bytes target prot opt in out source destination Nov 6 23:41:24.376458 waagent[2008]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 6 23:41:24.376458 waagent[2008]: 4 415 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 6 23:41:24.376458 waagent[2008]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 6 23:41:24.381116 waagent[2008]: 2025-11-06T23:41:24.381059Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 6 23:41:24.381116 waagent[2008]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 6 23:41:24.381116 waagent[2008]: pkts bytes target prot opt in out source destination Nov 6 23:41:24.381116 waagent[2008]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 6 23:41:24.381116 waagent[2008]: pkts bytes target prot opt in out source destination Nov 6 23:41:24.381116 waagent[2008]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 6 23:41:24.381116 waagent[2008]: pkts bytes target prot opt in out source destination Nov 6 23:41:24.381116 waagent[2008]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 6 23:41:24.381116 waagent[2008]: 14 1460 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 6 23:41:24.381116 waagent[2008]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 6 23:41:24.381568 waagent[2008]: 2025-11-06T23:41:24.381379Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Nov 6 23:41:30.047727 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Nov 6 23:41:33.040887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 6 23:41:33.047525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:41:33.172203 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:41:33.176552 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:41:33.215067 kubelet[2255]: E1106 23:41:33.214992 2255 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:41:33.217331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:41:33.217548 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:41:33.217964 systemd[1]: kubelet.service: Consumed 139ms CPU time, 114.1M memory peak. Nov 6 23:41:33.227393 update_engine[1731]: I20251106 23:41:33.227342 1731 update_attempter.cc:509] Updating boot flags... Nov 6 23:41:33.938381 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2277) Nov 6 23:41:41.219712 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
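The EnvHandler output above shows the three OUTPUT-chain rules waagent maintains for the wireserver: allow DNS traffic to 168.63.129.16, allow traffic owned by root, and drop new or invalid connections from everything else. A sketch of the equivalent iptables invocations (equivalent commands, not the agent's own code), driven from Python for consistency with the other sketches:

import subprocess

# Equivalent of the waagent-managed OUTPUT rules shown in the log above.
# Ordering matters: the two ACCEPT rules must precede the DROP.
WIRESERVER = "168.63.129.16"
rules = [
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in rules:
    subprocess.run(["iptables", "-w"] + rule, check=True)   # -w waits for the xtables lock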
Nov 6 23:41:41.230577 systemd[1]: Started sshd@0-10.200.8.12:22-10.200.16.10:49104.service - OpenSSH per-connection server daemon (10.200.16.10:49104). Nov 6 23:41:41.945757 sshd[2326]: Accepted publickey for core from 10.200.16.10 port 49104 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:41:41.947170 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:41:41.951462 systemd-logind[1729]: New session 3 of user core. Nov 6 23:41:41.961452 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 23:41:42.500629 systemd[1]: Started sshd@1-10.200.8.12:22-10.200.16.10:49116.service - OpenSSH per-connection server daemon (10.200.16.10:49116). Nov 6 23:41:43.124409 sshd[2331]: Accepted publickey for core from 10.200.16.10 port 49116 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:41:43.125821 sshd-session[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:41:43.130831 systemd-logind[1729]: New session 4 of user core. Nov 6 23:41:43.141502 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 23:41:43.290805 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 6 23:41:43.297601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:41:43.403435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:41:43.407783 (kubelet)[2342]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:41:43.444858 kubelet[2342]: E1106 23:41:43.444804 2342 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:41:43.447260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:41:43.447495 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:41:43.447898 systemd[1]: kubelet.service: Consumed 141ms CPU time, 112.2M memory peak. Nov 6 23:41:43.567553 sshd[2333]: Connection closed by 10.200.16.10 port 49116 Nov 6 23:41:43.568268 sshd-session[2331]: pam_unix(sshd:session): session closed for user core Nov 6 23:41:43.571962 systemd[1]: sshd@1-10.200.8.12:22-10.200.16.10:49116.service: Deactivated successfully. Nov 6 23:41:43.573817 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 23:41:43.574549 systemd-logind[1729]: Session 4 logged out. Waiting for processes to exit. Nov 6 23:41:43.575706 systemd-logind[1729]: Removed session 4. Nov 6 23:41:43.683593 systemd[1]: Started sshd@2-10.200.8.12:22-10.200.16.10:49130.service - OpenSSH per-connection server daemon (10.200.16.10:49130). Nov 6 23:41:44.310279 sshd[2354]: Accepted publickey for core from 10.200.16.10 port 49130 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:41:44.311663 sshd-session[2354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:41:44.317052 systemd-logind[1729]: New session 5 of user core. Nov 6 23:41:44.323470 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 6 23:41:44.757077 sshd[2356]: Connection closed by 10.200.16.10 port 49130 Nov 6 23:41:44.758072 sshd-session[2354]: pam_unix(sshd:session): session closed for user core Nov 6 23:41:44.761671 systemd[1]: sshd@2-10.200.8.12:22-10.200.16.10:49130.service: Deactivated successfully. Nov 6 23:41:44.763599 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 23:41:44.764348 systemd-logind[1729]: Session 5 logged out. Waiting for processes to exit. Nov 6 23:41:44.765242 systemd-logind[1729]: Removed session 5. Nov 6 23:41:44.872600 systemd[1]: Started sshd@3-10.200.8.12:22-10.200.16.10:49142.service - OpenSSH per-connection server daemon (10.200.16.10:49142). Nov 6 23:41:45.498418 sshd[2362]: Accepted publickey for core from 10.200.16.10 port 49142 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:41:45.499798 sshd-session[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:41:45.505193 systemd-logind[1729]: New session 6 of user core. Nov 6 23:41:45.510478 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 23:41:45.943419 sshd[2364]: Connection closed by 10.200.16.10 port 49142 Nov 6 23:41:45.944284 sshd-session[2362]: pam_unix(sshd:session): session closed for user core Nov 6 23:41:45.947271 systemd[1]: sshd@3-10.200.8.12:22-10.200.16.10:49142.service: Deactivated successfully. Nov 6 23:41:45.949573 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 23:41:45.951095 systemd-logind[1729]: Session 6 logged out. Waiting for processes to exit. Nov 6 23:41:45.952442 systemd-logind[1729]: Removed session 6. Nov 6 23:41:46.058625 systemd[1]: Started sshd@4-10.200.8.12:22-10.200.16.10:49146.service - OpenSSH per-connection server daemon (10.200.16.10:49146). Nov 6 23:41:46.683136 sshd[2370]: Accepted publickey for core from 10.200.16.10 port 49146 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:41:46.684542 sshd-session[2370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:41:46.689938 systemd-logind[1729]: New session 7 of user core. Nov 6 23:41:46.695480 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 23:41:47.202148 sudo[2373]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 23:41:47.202544 sudo[2373]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:41:47.217742 sudo[2373]: pam_unix(sudo:session): session closed for user root Nov 6 23:41:47.319763 sshd[2372]: Connection closed by 10.200.16.10 port 49146 Nov 6 23:41:47.320790 sshd-session[2370]: pam_unix(sshd:session): session closed for user core Nov 6 23:41:47.323973 systemd[1]: sshd@4-10.200.8.12:22-10.200.16.10:49146.service: Deactivated successfully. Nov 6 23:41:47.326039 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 23:41:47.327523 systemd-logind[1729]: Session 7 logged out. Waiting for processes to exit. Nov 6 23:41:47.328657 systemd-logind[1729]: Removed session 7. Nov 6 23:41:47.434917 systemd[1]: Started sshd@5-10.200.8.12:22-10.200.16.10:49148.service - OpenSSH per-connection server daemon (10.200.16.10:49148). Nov 6 23:41:48.057731 sshd[2379]: Accepted publickey for core from 10.200.16.10 port 49148 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:41:48.059186 sshd-session[2379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:41:48.064736 systemd-logind[1729]: New session 8 of user core. 
Nov 6 23:41:48.070480 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 23:41:48.401277 sudo[2383]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 23:41:48.401662 sudo[2383]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:41:48.405090 sudo[2383]: pam_unix(sudo:session): session closed for user root Nov 6 23:41:48.410337 sudo[2382]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 23:41:48.410680 sudo[2382]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:41:48.429724 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:41:48.456278 augenrules[2405]: No rules Nov 6 23:41:48.457748 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:41:48.458008 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:41:48.459116 sudo[2382]: pam_unix(sudo:session): session closed for user root Nov 6 23:41:48.560097 sshd[2381]: Connection closed by 10.200.16.10 port 49148 Nov 6 23:41:48.560791 sshd-session[2379]: pam_unix(sshd:session): session closed for user core Nov 6 23:41:48.563681 systemd[1]: sshd@5-10.200.8.12:22-10.200.16.10:49148.service: Deactivated successfully. Nov 6 23:41:48.565710 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 23:41:48.567095 systemd-logind[1729]: Session 8 logged out. Waiting for processes to exit. Nov 6 23:41:48.568052 systemd-logind[1729]: Removed session 8. Nov 6 23:41:48.676599 systemd[1]: Started sshd@6-10.200.8.12:22-10.200.16.10:49150.service - OpenSSH per-connection server daemon (10.200.16.10:49150). Nov 6 23:41:49.303009 sshd[2414]: Accepted publickey for core from 10.200.16.10 port 49150 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:41:49.304399 sshd-session[2414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:41:49.309362 systemd-logind[1729]: New session 9 of user core. Nov 6 23:41:49.318467 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 23:41:49.649213 sudo[2417]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 23:41:49.649615 sudo[2417]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:41:51.380652 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 23:41:51.380800 (dockerd)[2435]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 23:41:53.540877 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Nov 6 23:41:53.548574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:41:54.185580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 23:41:54.190138 (kubelet)[2448]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:41:54.312807 kubelet[2448]: E1106 23:41:54.312753 2448 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:41:54.315123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:41:54.315354 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:41:54.315865 systemd[1]: kubelet.service: Consumed 148ms CPU time, 110.3M memory peak. Nov 6 23:41:54.931873 dockerd[2435]: time="2025-11-06T23:41:54.931806576Z" level=info msg="Starting up" Nov 6 23:41:55.419029 dockerd[2435]: time="2025-11-06T23:41:55.418977086Z" level=info msg="Loading containers: start." Nov 6 23:41:55.647334 kernel: Initializing XFRM netlink socket Nov 6 23:41:55.774195 systemd-networkd[1344]: docker0: Link UP Nov 6 23:41:55.812703 dockerd[2435]: time="2025-11-06T23:41:55.812656526Z" level=info msg="Loading containers: done." Nov 6 23:41:55.828484 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck944619815-merged.mount: Deactivated successfully. Nov 6 23:41:55.835163 dockerd[2435]: time="2025-11-06T23:41:55.835114911Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 23:41:55.835272 dockerd[2435]: time="2025-11-06T23:41:55.835230712Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Nov 6 23:41:55.835403 dockerd[2435]: time="2025-11-06T23:41:55.835376714Z" level=info msg="Daemon has completed initialization" Nov 6 23:41:55.883913 dockerd[2435]: time="2025-11-06T23:41:55.883856113Z" level=info msg="API listen on /run/docker.sock" Nov 6 23:41:55.884240 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 23:41:56.657133 containerd[1753]: time="2025-11-06T23:41:56.657090678Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 6 23:41:57.570618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1943101428.mount: Deactivated successfully. 
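The kubelet failures repeating above (restart counter 4, 5, 6, ...) all come from the same cause: /var/lib/kubelet/config.yaml does not exist yet, so the process exits with status 1 and systemd keeps rescheduling it. That file is typically written when the node is bootstrapped (for example by kubeadm), so the loop is expected on a node that has not been configured. A minimal sketch of the failing load path is below; the function names are hypothetical and only the error chain mirrors the log.

```go
// Sketch, not kubelet source: read the config file and wrap the error the
// way the "failed to load kubelet config file" entries above report it.
package main

import (
	"fmt"
	"os"
)

// loadKubeletConfig is a hypothetical helper mirroring the logged error chain.
func loadKubeletConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to read kubelet config file %q, error: %w", path, err)
	}
	return data, nil
}

func main() {
	if _, err := loadKubeletConfig("/var/lib/kubelet/config.yaml"); err != nil {
		// With the file absent this prints an ENOENT error and exits 1,
		// which systemd records as status=1/FAILURE and retries.
		fmt.Fprintln(os.Stderr, "command failed:", err)
		os.Exit(1)
	}
}
```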
Nov 6 23:41:58.923090 containerd[1753]: time="2025-11-06T23:41:58.923032930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:41:58.925585 containerd[1753]: time="2025-11-06T23:41:58.925082347Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065400" Nov 6 23:41:58.931270 containerd[1753]: time="2025-11-06T23:41:58.930052288Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:41:58.934339 containerd[1753]: time="2025-11-06T23:41:58.934291823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:41:58.935313 containerd[1753]: time="2025-11-06T23:41:58.935267231Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.278134553s" Nov 6 23:41:58.935397 containerd[1753]: time="2025-11-06T23:41:58.935324032Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 6 23:41:58.936545 containerd[1753]: time="2025-11-06T23:41:58.936521041Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 6 23:42:00.395848 containerd[1753]: time="2025-11-06T23:42:00.395785098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:00.398064 containerd[1753]: time="2025-11-06T23:42:00.397834015Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159765" Nov 6 23:42:00.403072 containerd[1753]: time="2025-11-06T23:42:00.401443844Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:00.408835 containerd[1753]: time="2025-11-06T23:42:00.408440501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:00.409919 containerd[1753]: time="2025-11-06T23:42:00.409402609Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.472849567s" Nov 6 23:42:00.409919 containerd[1753]: time="2025-11-06T23:42:00.409438109Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 6 23:42:00.410321 containerd[1753]: 
time="2025-11-06T23:42:00.410280716Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 6 23:42:01.596822 containerd[1753]: time="2025-11-06T23:42:01.596765362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:01.598911 containerd[1753]: time="2025-11-06T23:42:01.598843979Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725101" Nov 6 23:42:01.601461 containerd[1753]: time="2025-11-06T23:42:01.601405000Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:01.606541 containerd[1753]: time="2025-11-06T23:42:01.606224639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:01.607260 containerd[1753]: time="2025-11-06T23:42:01.607219047Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.196893431s" Nov 6 23:42:01.607364 containerd[1753]: time="2025-11-06T23:42:01.607263347Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 6 23:42:01.607775 containerd[1753]: time="2025-11-06T23:42:01.607747651Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 6 23:42:02.830787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1685751069.mount: Deactivated successfully. 
Nov 6 23:42:03.239131 containerd[1753]: time="2025-11-06T23:42:03.238992813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:03.241675 containerd[1753]: time="2025-11-06T23:42:03.241616834Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964707" Nov 6 23:42:03.245199 containerd[1753]: time="2025-11-06T23:42:03.245145663Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:03.249195 containerd[1753]: time="2025-11-06T23:42:03.249146596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:03.249910 containerd[1753]: time="2025-11-06T23:42:03.249736700Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.641950049s" Nov 6 23:42:03.249910 containerd[1753]: time="2025-11-06T23:42:03.249771401Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 6 23:42:03.250624 containerd[1753]: time="2025-11-06T23:42:03.250596807Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 6 23:42:03.890034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount370823521.mount: Deactivated successfully. Nov 6 23:42:04.540900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Nov 6 23:42:04.546558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:42:05.092529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:42:05.099617 (kubelet)[2758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:42:05.374225 kubelet[2758]: E1106 23:42:05.374081 2758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:42:05.376734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:42:05.376964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:42:05.377424 systemd[1]: kubelet.service: Consumed 155ms CPU time, 109.9M memory peak. 
Nov 6 23:42:05.955668 containerd[1753]: time="2025-11-06T23:42:05.955611899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:05.959363 containerd[1753]: time="2025-11-06T23:42:05.959283529Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Nov 6 23:42:05.962185 containerd[1753]: time="2025-11-06T23:42:05.962130752Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:05.966959 containerd[1753]: time="2025-11-06T23:42:05.966637788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:05.967703 containerd[1753]: time="2025-11-06T23:42:05.967669097Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.717038289s" Nov 6 23:42:05.967781 containerd[1753]: time="2025-11-06T23:42:05.967709397Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 6 23:42:05.968323 containerd[1753]: time="2025-11-06T23:42:05.968282302Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 6 23:42:06.509414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1782635791.mount: Deactivated successfully. 
Nov 6 23:42:06.526469 containerd[1753]: time="2025-11-06T23:42:06.526423039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:06.528864 containerd[1753]: time="2025-11-06T23:42:06.528696058Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Nov 6 23:42:06.532329 containerd[1753]: time="2025-11-06T23:42:06.531335179Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:06.535415 containerd[1753]: time="2025-11-06T23:42:06.535369212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:06.536755 containerd[1753]: time="2025-11-06T23:42:06.536148419Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 567.818616ms" Nov 6 23:42:06.536755 containerd[1753]: time="2025-11-06T23:42:06.536183219Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 6 23:42:06.536888 containerd[1753]: time="2025-11-06T23:42:06.536793424Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 6 23:42:10.059468 containerd[1753]: time="2025-11-06T23:42:10.059409639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:10.062144 containerd[1753]: time="2025-11-06T23:42:10.061925859Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514601" Nov 6 23:42:10.065276 containerd[1753]: time="2025-11-06T23:42:10.064907484Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:10.070616 containerd[1753]: time="2025-11-06T23:42:10.070575030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:10.071815 containerd[1753]: time="2025-11-06T23:42:10.071781639Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.534960215s" Nov 6 23:42:10.071944 containerd[1753]: time="2025-11-06T23:42:10.071925641Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 6 23:42:14.767161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:42:14.767872 systemd[1]: kubelet.service: Consumed 155ms CPU time, 109.9M memory peak. 
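The image pulls above each report a "bytes read" figure and a total pull duration, which gives a rough feel for transfer speed. The sketch below just redoes that arithmetic with the numbers quoted in the log; note that "bytes read" is the compressed transfer containerd reports and the durations include unpacking, so these are only lower-bound estimates of network throughput.

```go
// Rough throughput estimates from the pull figures logged above.
package main

import "fmt"

func main() {
	pulls := []struct {
		image   string
		bytes   float64 // "bytes read" from the log
		seconds float64 // pull duration from the log
	}{
		{"kube-apiserver:v1.34.1", 27065400, 2.278134553},
		{"kube-proxy:v1.34.1", 25964707, 1.641950049},
		{"coredns:v1.12.1", 22388015, 2.717038289},
		{"etcd:3.6.4-0", 73514601, 3.534960215},
	}
	for _, p := range pulls {
		mbps := p.bytes / p.seconds / 1e6
		fmt.Printf("%-25s ~%.1f MB/s\n", p.image, mbps)
	}
}
```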
Nov 6 23:42:14.773568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:42:14.820985 systemd[1]: Reload requested from client PID 2850 ('systemctl') (unit session-9.scope)... Nov 6 23:42:14.821007 systemd[1]: Reloading... Nov 6 23:42:14.969326 zram_generator::config[2903]: No configuration found. Nov 6 23:42:15.089338 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:42:15.204427 systemd[1]: Reloading finished in 382 ms. Nov 6 23:42:15.844459 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 23:42:15.844573 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 6 23:42:15.844890 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:42:15.844949 systemd[1]: kubelet.service: Consumed 108ms CPU time, 92.7M memory peak. Nov 6 23:42:15.856981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:42:15.976211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:42:15.981067 (kubelet)[2964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:42:16.017532 kubelet[2964]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 23:42:16.017532 kubelet[2964]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:42:16.017940 kubelet[2964]: I1106 23:42:16.017580 2964 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 23:42:16.870201 kubelet[2964]: I1106 23:42:16.870152 2964 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 23:42:16.870201 kubelet[2964]: I1106 23:42:16.870180 2964 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 23:42:16.870201 kubelet[2964]: I1106 23:42:16.870207 2964 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 23:42:16.870201 kubelet[2964]: I1106 23:42:16.870214 2964 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 23:42:16.870540 kubelet[2964]: I1106 23:42:16.870531 2964 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 23:42:16.878436 kubelet[2964]: E1106 23:42:16.878400 2964 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 23:42:16.879047 kubelet[2964]: I1106 23:42:16.879008 2964 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:42:16.886194 kubelet[2964]: E1106 23:42:16.886145 2964 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 6 23:42:16.886311 kubelet[2964]: I1106 23:42:16.886218 2964 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 6 23:42:16.889573 kubelet[2964]: I1106 23:42:16.889551 2964 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 6 23:42:16.889779 kubelet[2964]: I1106 23:42:16.889752 2964 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 23:42:16.889934 kubelet[2964]: I1106 23:42:16.889777 2964 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.4-n-c920fca088","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 23:42:16.890081 kubelet[2964]: I1106 23:42:16.889939 2964 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 23:42:16.890081 kubelet[2964]: I1106 23:42:16.889952 2964 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 23:42:16.890081 kubelet[2964]: I1106 23:42:16.890047 2964 container_manager_linux.go:315] "Creating Dynamic Resource Allocation 
(DRA) manager" Nov 6 23:42:16.897778 kubelet[2964]: I1106 23:42:16.897756 2964 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:42:16.899792 kubelet[2964]: I1106 23:42:16.899771 2964 kubelet.go:475] "Attempting to sync node with API server" Nov 6 23:42:16.899792 kubelet[2964]: I1106 23:42:16.899795 2964 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 23:42:16.899916 kubelet[2964]: I1106 23:42:16.899821 2964 kubelet.go:387] "Adding apiserver pod source" Nov 6 23:42:16.899916 kubelet[2964]: I1106 23:42:16.899845 2964 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 23:42:16.903438 kubelet[2964]: E1106 23:42:16.902151 2964 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 23:42:16.903438 kubelet[2964]: E1106 23:42:16.902290 2964 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.4-n-c920fca088&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 23:42:16.903554 kubelet[2964]: I1106 23:42:16.903541 2964 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 6 23:42:16.904676 kubelet[2964]: I1106 23:42:16.904181 2964 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 23:42:16.904676 kubelet[2964]: I1106 23:42:16.904228 2964 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 23:42:16.904676 kubelet[2964]: W1106 23:42:16.904279 2964 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 6 23:42:16.907761 kubelet[2964]: I1106 23:42:16.907740 2964 server.go:1262] "Started kubelet" Nov 6 23:42:16.909322 kubelet[2964]: I1106 23:42:16.908110 2964 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 23:42:16.909322 kubelet[2964]: I1106 23:42:16.909046 2964 server.go:310] "Adding debug handlers to kubelet server" Nov 6 23:42:16.912315 kubelet[2964]: I1106 23:42:16.911663 2964 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 23:42:16.912315 kubelet[2964]: I1106 23:42:16.911733 2964 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 23:42:16.912315 kubelet[2964]: I1106 23:42:16.912064 2964 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 23:42:16.915342 kubelet[2964]: I1106 23:42:16.913706 2964 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 23:42:16.915342 kubelet[2964]: E1106 23:42:16.912211 2964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.4-n-c920fca088.18758f6f30c40f8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.4-n-c920fca088,UID:ci-4230.2.4-n-c920fca088,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.4-n-c920fca088,},FirstTimestamp:2025-11-06 23:42:16.907714446 +0000 UTC m=+0.923356612,LastTimestamp:2025-11-06 23:42:16.907714446 +0000 UTC m=+0.923356612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.4-n-c920fca088,}" Nov 6 23:42:16.915342 kubelet[2964]: I1106 23:42:16.913782 2964 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 23:42:16.915342 kubelet[2964]: I1106 23:42:16.915157 2964 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 6 23:42:16.917396 kubelet[2964]: I1106 23:42:16.917376 2964 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 6 23:42:16.917475 kubelet[2964]: I1106 23:42:16.917428 2964 reconciler.go:29] "Reconciler: start to sync state" Nov 6 23:42:16.918088 kubelet[2964]: E1106 23:42:16.918058 2964 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 23:42:16.918391 kubelet[2964]: E1106 23:42:16.918365 2964 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-c920fca088\" not found" Nov 6 23:42:16.918486 kubelet[2964]: E1106 23:42:16.918460 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-c920fca088?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="200ms" Nov 6 23:42:16.919217 kubelet[2964]: I1106 23:42:16.919189 2964 factory.go:223] Registration of the systemd container factory successfully Nov 
6 23:42:16.919286 kubelet[2964]: I1106 23:42:16.919268 2964 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 23:42:16.920880 kubelet[2964]: E1106 23:42:16.920857 2964 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 23:42:16.920988 kubelet[2964]: I1106 23:42:16.920970 2964 factory.go:223] Registration of the containerd container factory successfully Nov 6 23:42:16.954521 kubelet[2964]: I1106 23:42:16.954497 2964 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 23:42:16.954521 kubelet[2964]: I1106 23:42:16.954519 2964 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 23:42:16.954668 kubelet[2964]: I1106 23:42:16.954543 2964 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:42:16.959180 kubelet[2964]: I1106 23:42:16.959156 2964 policy_none.go:49] "None policy: Start" Nov 6 23:42:16.959180 kubelet[2964]: I1106 23:42:16.959187 2964 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 6 23:42:16.959342 kubelet[2964]: I1106 23:42:16.959204 2964 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 6 23:42:16.963329 kubelet[2964]: I1106 23:42:16.963310 2964 policy_none.go:47] "Start" Nov 6 23:42:16.967857 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 23:42:16.977546 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 23:42:16.990139 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 23:42:16.993320 kubelet[2964]: E1106 23:42:16.992888 2964 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 23:42:16.993320 kubelet[2964]: I1106 23:42:16.993126 2964 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 23:42:16.993320 kubelet[2964]: I1106 23:42:16.993144 2964 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 23:42:16.996317 kubelet[2964]: I1106 23:42:16.993930 2964 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 23:42:16.997767 kubelet[2964]: E1106 23:42:16.997743 2964 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 23:42:16.997847 kubelet[2964]: E1106 23:42:16.997792 2964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.4-n-c920fca088\" not found" Nov 6 23:42:16.999627 kubelet[2964]: I1106 23:42:16.999597 2964 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 6 23:42:17.001431 kubelet[2964]: I1106 23:42:17.001401 2964 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 6 23:42:17.001431 kubelet[2964]: I1106 23:42:17.001430 2964 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 6 23:42:17.001539 kubelet[2964]: I1106 23:42:17.001454 2964 kubelet.go:2427] "Starting kubelet main sync loop" Nov 6 23:42:17.001539 kubelet[2964]: E1106 23:42:17.001494 2964 kubelet.go:2451] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Nov 6 23:42:17.002910 kubelet[2964]: E1106 23:42:17.002884 2964 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 23:42:17.095603 kubelet[2964]: I1106 23:42:17.095570 2964 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.096071 kubelet[2964]: E1106 23:42:17.096037 2964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.113767 systemd[1]: Created slice kubepods-burstable-pod662b1b2a78d8a3fbd1ec6444f6c30966.slice - libcontainer container kubepods-burstable-pod662b1b2a78d8a3fbd1ec6444f6c30966.slice. Nov 6 23:42:17.119227 kubelet[2964]: E1106 23:42:17.119186 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-c920fca088?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="400ms" Nov 6 23:42:17.123357 kubelet[2964]: E1106 23:42:17.121338 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-c920fca088\" not found" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.127193 systemd[1]: Created slice kubepods-burstable-pod1e66ed0cd45e88c9bd5be278cf6ed0c6.slice - libcontainer container kubepods-burstable-pod1e66ed0cd45e88c9bd5be278cf6ed0c6.slice. Nov 6 23:42:17.128908 kubelet[2964]: E1106 23:42:17.128878 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-c920fca088\" not found" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.141795 systemd[1]: Created slice kubepods-burstable-poda05a3f9ae5fbf8a271b913ff150e3e53.slice - libcontainer container kubepods-burstable-poda05a3f9ae5fbf8a271b913ff150e3e53.slice. 
Nov 6 23:42:17.143485 kubelet[2964]: E1106 23:42:17.143464 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-c920fca088\" not found" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.218628 kubelet[2964]: I1106 23:42:17.218589 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/662b1b2a78d8a3fbd1ec6444f6c30966-kubeconfig\") pod \"kube-scheduler-ci-4230.2.4-n-c920fca088\" (UID: \"662b1b2a78d8a3fbd1ec6444f6c30966\") " pod="kube-system/kube-scheduler-ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.218628 kubelet[2964]: I1106 23:42:17.218628 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e66ed0cd45e88c9bd5be278cf6ed0c6-ca-certs\") pod \"kube-apiserver-ci-4230.2.4-n-c920fca088\" (UID: \"1e66ed0cd45e88c9bd5be278cf6ed0c6\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.218628 kubelet[2964]: I1106 23:42:17.218655 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a05a3f9ae5fbf8a271b913ff150e3e53-ca-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-c920fca088\" (UID: \"a05a3f9ae5fbf8a271b913ff150e3e53\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.218967 kubelet[2964]: I1106 23:42:17.218676 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a05a3f9ae5fbf8a271b913ff150e3e53-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.4-n-c920fca088\" (UID: \"a05a3f9ae5fbf8a271b913ff150e3e53\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.218967 kubelet[2964]: I1106 23:42:17.218698 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e66ed0cd45e88c9bd5be278cf6ed0c6-k8s-certs\") pod \"kube-apiserver-ci-4230.2.4-n-c920fca088\" (UID: \"1e66ed0cd45e88c9bd5be278cf6ed0c6\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.218967 kubelet[2964]: I1106 23:42:17.218717 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e66ed0cd45e88c9bd5be278cf6ed0c6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.4-n-c920fca088\" (UID: \"1e66ed0cd45e88c9bd5be278cf6ed0c6\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.218967 kubelet[2964]: I1106 23:42:17.218738 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a05a3f9ae5fbf8a271b913ff150e3e53-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.4-n-c920fca088\" (UID: \"a05a3f9ae5fbf8a271b913ff150e3e53\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.218967 kubelet[2964]: I1106 23:42:17.218758 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a05a3f9ae5fbf8a271b913ff150e3e53-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-c920fca088\" 
(UID: \"a05a3f9ae5fbf8a271b913ff150e3e53\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.219096 kubelet[2964]: I1106 23:42:17.218801 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a05a3f9ae5fbf8a271b913ff150e3e53-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.4-n-c920fca088\" (UID: \"a05a3f9ae5fbf8a271b913ff150e3e53\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.298811 kubelet[2964]: I1106 23:42:17.298779 2964 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.299180 kubelet[2964]: E1106 23:42:17.299150 2964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.427499 containerd[1753]: time="2025-11-06T23:42:17.427375717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.4-n-c920fca088,Uid:662b1b2a78d8a3fbd1ec6444f6c30966,Namespace:kube-system,Attempt:0,}" Nov 6 23:42:17.433700 containerd[1753]: time="2025-11-06T23:42:17.433669268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.4-n-c920fca088,Uid:1e66ed0cd45e88c9bd5be278cf6ed0c6,Namespace:kube-system,Attempt:0,}" Nov 6 23:42:17.450388 containerd[1753]: time="2025-11-06T23:42:17.450342302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.4-n-c920fca088,Uid:a05a3f9ae5fbf8a271b913ff150e3e53,Namespace:kube-system,Attempt:0,}" Nov 6 23:42:17.520144 kubelet[2964]: E1106 23:42:17.520092 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-c920fca088?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="800ms" Nov 6 23:42:17.575705 kubelet[2964]: E1106 23:42:17.575601 2964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.4-n-c920fca088.18758f6f30c40f8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.4-n-c920fca088,UID:ci-4230.2.4-n-c920fca088,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.4-n-c920fca088,},FirstTimestamp:2025-11-06 23:42:16.907714446 +0000 UTC m=+0.923356612,LastTimestamp:2025-11-06 23:42:16.907714446 +0000 UTC m=+0.923356612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.4-n-c920fca088,}" Nov 6 23:42:17.700827 kubelet[2964]: I1106 23:42:17.700717 2964 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.701112 kubelet[2964]: E1106 23:42:17.701085 2964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:17.710035 kubelet[2964]: E1106 23:42:17.709990 2964 reflector.go:205] "Failed to watch" 
err="failed to list *v1.Node: Get \"https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.4-n-c920fca088&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 23:42:17.722753 kubelet[2964]: E1106 23:42:17.722708 2964 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 23:42:17.736452 kubelet[2964]: E1106 23:42:17.736411 2964 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 23:42:17.867459 kubelet[2964]: E1106 23:42:17.867416 2964 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 23:42:17.971247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3235304246.mount: Deactivated successfully. Nov 6 23:42:17.991761 containerd[1753]: time="2025-11-06T23:42:17.991706047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:42:18.001566 containerd[1753]: time="2025-11-06T23:42:18.001423725Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Nov 6 23:42:18.004693 containerd[1753]: time="2025-11-06T23:42:18.004659751Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:42:18.007879 containerd[1753]: time="2025-11-06T23:42:18.007841476Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:42:18.015139 containerd[1753]: time="2025-11-06T23:42:18.015077035Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 6 23:42:18.019753 containerd[1753]: time="2025-11-06T23:42:18.019718572Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:42:18.023960 containerd[1753]: time="2025-11-06T23:42:18.023045298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:42:18.024152 containerd[1753]: time="2025-11-06T23:42:18.024121307Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with 
image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 596.641089ms" Nov 6 23:42:18.026543 containerd[1753]: time="2025-11-06T23:42:18.026513326Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 592.759558ms" Nov 6 23:42:18.034561 containerd[1753]: time="2025-11-06T23:42:18.034476790Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 6 23:42:18.085542 containerd[1753]: time="2025-11-06T23:42:18.085493400Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 635.049798ms" Nov 6 23:42:18.321474 kubelet[2964]: E1106 23:42:18.321429 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-c920fca088?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="1.6s" Nov 6 23:42:18.503248 kubelet[2964]: I1106 23:42:18.503220 2964 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:18.506189 kubelet[2964]: E1106 23:42:18.505983 2964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:18.764818 containerd[1753]: time="2025-11-06T23:42:18.764406349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:42:18.764818 containerd[1753]: time="2025-11-06T23:42:18.764470150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:42:18.764818 containerd[1753]: time="2025-11-06T23:42:18.764491950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:18.764818 containerd[1753]: time="2025-11-06T23:42:18.764582751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:18.765576 containerd[1753]: time="2025-11-06T23:42:18.765349657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:42:18.765576 containerd[1753]: time="2025-11-06T23:42:18.765407057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:42:18.765576 containerd[1753]: time="2025-11-06T23:42:18.765424657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:18.765576 containerd[1753]: time="2025-11-06T23:42:18.765514958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:18.768192 containerd[1753]: time="2025-11-06T23:42:18.767267572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:42:18.768192 containerd[1753]: time="2025-11-06T23:42:18.767337773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:42:18.768192 containerd[1753]: time="2025-11-06T23:42:18.767358973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:18.768192 containerd[1753]: time="2025-11-06T23:42:18.767439173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:18.819499 systemd[1]: Started cri-containerd-8e65f2221952b9fcef987ab6b995ddd2171952352ea789fc1609057bae9dce50.scope - libcontainer container 8e65f2221952b9fcef987ab6b995ddd2171952352ea789fc1609057bae9dce50. Nov 6 23:42:18.826599 systemd[1]: Started cri-containerd-1390e7c6845900dd95fe2b24a6e89352b293d0f0732d0f0b3f79dd34309be48d.scope - libcontainer container 1390e7c6845900dd95fe2b24a6e89352b293d0f0732d0f0b3f79dd34309be48d. Nov 6 23:42:18.829406 systemd[1]: Started cri-containerd-51c9a12e15daa85e9a1470be57b42a42d12bada2c87e7b8260d5e4b32dffb7f1.scope - libcontainer container 51c9a12e15daa85e9a1470be57b42a42d12bada2c87e7b8260d5e4b32dffb7f1. Nov 6 23:42:18.902250 containerd[1753]: time="2025-11-06T23:42:18.902187455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.4-n-c920fca088,Uid:662b1b2a78d8a3fbd1ec6444f6c30966,Namespace:kube-system,Attempt:0,} returns sandbox id \"1390e7c6845900dd95fe2b24a6e89352b293d0f0732d0f0b3f79dd34309be48d\"" Nov 6 23:42:18.909862 containerd[1753]: time="2025-11-06T23:42:18.909677115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.4-n-c920fca088,Uid:a05a3f9ae5fbf8a271b913ff150e3e53,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e65f2221952b9fcef987ab6b995ddd2171952352ea789fc1609057bae9dce50\"" Nov 6 23:42:18.915007 containerd[1753]: time="2025-11-06T23:42:18.914910257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.4-n-c920fca088,Uid:1e66ed0cd45e88c9bd5be278cf6ed0c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"51c9a12e15daa85e9a1470be57b42a42d12bada2c87e7b8260d5e4b32dffb7f1\"" Nov 6 23:42:18.919341 containerd[1753]: time="2025-11-06T23:42:18.919295492Z" level=info msg="CreateContainer within sandbox \"1390e7c6845900dd95fe2b24a6e89352b293d0f0732d0f0b3f79dd34309be48d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 23:42:18.923951 containerd[1753]: time="2025-11-06T23:42:18.923910829Z" level=info msg="CreateContainer within sandbox \"8e65f2221952b9fcef987ab6b995ddd2171952352ea789fc1609057bae9dce50\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 23:42:18.928065 containerd[1753]: time="2025-11-06T23:42:18.927961362Z" level=info msg="CreateContainer within sandbox \"51c9a12e15daa85e9a1470be57b42a42d12bada2c87e7b8260d5e4b32dffb7f1\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 23:42:18.976712 containerd[1753]: time="2025-11-06T23:42:18.976672953Z" level=info msg="CreateContainer within sandbox \"1390e7c6845900dd95fe2b24a6e89352b293d0f0732d0f0b3f79dd34309be48d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7d2d059a617f0dc9afe7ad9bdba916c7fe666cdfff4aec4ab42d0873ef51b14e\"" Nov 6 23:42:18.977658 containerd[1753]: time="2025-11-06T23:42:18.977621561Z" level=info msg="StartContainer for \"7d2d059a617f0dc9afe7ad9bdba916c7fe666cdfff4aec4ab42d0873ef51b14e\"" Nov 6 23:42:18.998657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029713390.mount: Deactivated successfully. Nov 6 23:42:19.005069 containerd[1753]: time="2025-11-06T23:42:19.005017780Z" level=info msg="CreateContainer within sandbox \"8e65f2221952b9fcef987ab6b995ddd2171952352ea789fc1609057bae9dce50\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"13e13cfd893fe5f226126e0429b4e867ed2b478efe1ec1aeb5971dd1b4418b2c\"" Nov 6 23:42:19.007861 containerd[1753]: time="2025-11-06T23:42:19.007823903Z" level=info msg="StartContainer for \"13e13cfd893fe5f226126e0429b4e867ed2b478efe1ec1aeb5971dd1b4418b2c\"" Nov 6 23:42:19.022511 systemd[1]: Started cri-containerd-7d2d059a617f0dc9afe7ad9bdba916c7fe666cdfff4aec4ab42d0873ef51b14e.scope - libcontainer container 7d2d059a617f0dc9afe7ad9bdba916c7fe666cdfff4aec4ab42d0873ef51b14e. Nov 6 23:42:19.031135 kubelet[2964]: E1106 23:42:19.030601 2964 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 23:42:19.037707 containerd[1753]: time="2025-11-06T23:42:19.037579542Z" level=info msg="CreateContainer within sandbox \"51c9a12e15daa85e9a1470be57b42a42d12bada2c87e7b8260d5e4b32dffb7f1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9ce07e9b6aaae01830c03fdd13a15ec96b3d9587555f9d9ff6164da7079b31fd\"" Nov 6 23:42:19.038492 containerd[1753]: time="2025-11-06T23:42:19.038459949Z" level=info msg="StartContainer for \"9ce07e9b6aaae01830c03fdd13a15ec96b3d9587555f9d9ff6164da7079b31fd\"" Nov 6 23:42:19.071482 systemd[1]: Started cri-containerd-13e13cfd893fe5f226126e0429b4e867ed2b478efe1ec1aeb5971dd1b4418b2c.scope - libcontainer container 13e13cfd893fe5f226126e0429b4e867ed2b478efe1ec1aeb5971dd1b4418b2c. Nov 6 23:42:19.090532 systemd[1]: Started cri-containerd-9ce07e9b6aaae01830c03fdd13a15ec96b3d9587555f9d9ff6164da7079b31fd.scope - libcontainer container 9ce07e9b6aaae01830c03fdd13a15ec96b3d9587555f9d9ff6164da7079b31fd. 
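The containerd entries above report each image pull's latency inside the message itself, e.g. the three registry.k8s.io/pause:3.8 pulls completing in 596.641089ms, 592.759558ms and 635.049798ms. A minimal sketch for pulling those figures out of journal text in this exact format, assuming Python 3; the regex and the helper name pull_durations are illustrative, not part of containerd or any tool logged here:

import re

# Matches containerd messages of the form:
#   Pulled image \"<ref>\" with image id \"sha256:...\" ... in 596.641089ms
PULL_RE = re.compile(r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*? in (?P<dur>[\d.]+)(?P<unit>ms|s)')

def pull_durations(lines):
    """Yield (image reference, pull latency in seconds) for each 'Pulled image ... in <duration>' entry."""
    for line in lines:
        m = PULL_RE.search(line)
        if m:
            scale = 0.001 if m.group("unit") == "ms" else 1.0
            yield m.group("image"), float(m.group("dur")) * scale

Fed the journal text above line by line, this yields entries such as ("registry.k8s.io/pause:3.8", ~0.597).
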
Nov 6 23:42:19.134789 containerd[1753]: time="2025-11-06T23:42:19.134002116Z" level=info msg="StartContainer for \"7d2d059a617f0dc9afe7ad9bdba916c7fe666cdfff4aec4ab42d0873ef51b14e\" returns successfully" Nov 6 23:42:19.149685 containerd[1753]: time="2025-11-06T23:42:19.149641741Z" level=info msg="StartContainer for \"13e13cfd893fe5f226126e0429b4e867ed2b478efe1ec1aeb5971dd1b4418b2c\" returns successfully" Nov 6 23:42:19.189404 containerd[1753]: time="2025-11-06T23:42:19.189245759Z" level=info msg="StartContainer for \"9ce07e9b6aaae01830c03fdd13a15ec96b3d9587555f9d9ff6164da7079b31fd\" returns successfully" Nov 6 23:42:20.033539 kubelet[2964]: E1106 23:42:20.033501 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-c920fca088\" not found" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:20.036363 kubelet[2964]: E1106 23:42:20.036103 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-c920fca088\" not found" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:20.038930 kubelet[2964]: E1106 23:42:20.038876 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-c920fca088\" not found" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:20.109565 kubelet[2964]: I1106 23:42:20.109531 2964 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:21.042044 kubelet[2964]: E1106 23:42:21.041852 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-c920fca088\" not found" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:21.043673 kubelet[2964]: E1106 23:42:21.043091 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-c920fca088\" not found" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:21.043673 kubelet[2964]: E1106 23:42:21.043483 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-c920fca088\" not found" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:21.664911 kubelet[2964]: E1106 23:42:21.664860 2964 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.4-n-c920fca088\" not found" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:21.800915 kubelet[2964]: I1106 23:42:21.800878 2964 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:21.800915 kubelet[2964]: E1106 23:42:21.800921 2964 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4230.2.4-n-c920fca088\": node \"ci-4230.2.4-n-c920fca088\" not found" Nov 6 23:42:21.818668 kubelet[2964]: I1106 23:42:21.818639 2964 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.4-n-c920fca088" Nov 6 23:42:21.856664 kubelet[2964]: E1106 23:42:21.856426 2964 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.4-n-c920fca088\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.4-n-c920fca088" Nov 6 23:42:21.856664 kubelet[2964]: I1106 23:42:21.856459 2964 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-c920fca088" Nov 6 23:42:21.858552 kubelet[2964]: E1106 
23:42:21.858352 2964 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.4-n-c920fca088\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-c920fca088" Nov 6 23:42:21.858552 kubelet[2964]: I1106 23:42:21.858379 2964 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.4-n-c920fca088" Nov 6 23:42:21.861102 kubelet[2964]: E1106 23:42:21.861070 2964 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.4-n-c920fca088\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.4-n-c920fca088" Nov 6 23:42:21.904507 kubelet[2964]: I1106 23:42:21.904363 2964 apiserver.go:52] "Watching apiserver" Nov 6 23:42:21.919135 kubelet[2964]: I1106 23:42:21.918467 2964 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 6 23:42:22.042546 kubelet[2964]: I1106 23:42:22.041629 2964 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.4-n-c920fca088" Nov 6 23:42:22.043530 kubelet[2964]: I1106 23:42:22.043271 2964 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.4-n-c920fca088" Nov 6 23:42:22.046281 kubelet[2964]: E1106 23:42:22.046231 2964 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.4-n-c920fca088\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.4-n-c920fca088" Nov 6 23:42:22.048081 kubelet[2964]: E1106 23:42:22.047833 2964 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.4-n-c920fca088\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.4-n-c920fca088" Nov 6 23:42:23.044318 kubelet[2964]: I1106 23:42:23.044274 2964 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.4-n-c920fca088" Nov 6 23:42:23.052175 kubelet[2964]: I1106 23:42:23.052135 2964 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 23:42:23.644280 systemd[1]: Reload requested from client PID 3253 ('systemctl') (unit session-9.scope)... Nov 6 23:42:23.644417 systemd[1]: Reloading... Nov 6 23:42:23.770337 zram_generator::config[3306]: No configuration found. Nov 6 23:42:23.896662 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:42:24.033351 systemd[1]: Reloading finished in 388 ms. Nov 6 23:42:24.069002 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:42:24.080731 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 23:42:24.081143 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:42:24.081216 systemd[1]: kubelet.service: Consumed 1.310s CPU time, 126.3M memory peak. Nov 6 23:42:24.092563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:42:24.721413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 23:42:24.732692 (kubelet)[3367]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:42:24.777917 kubelet[3367]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 23:42:24.778666 kubelet[3367]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:42:24.778666 kubelet[3367]: I1106 23:42:24.778479 3367 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 23:42:24.784699 kubelet[3367]: I1106 23:42:24.784668 3367 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 23:42:24.784699 kubelet[3367]: I1106 23:42:24.784690 3367 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 23:42:24.784850 kubelet[3367]: I1106 23:42:24.784715 3367 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 23:42:24.784850 kubelet[3367]: I1106 23:42:24.784723 3367 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 23:42:24.784972 kubelet[3367]: I1106 23:42:24.784952 3367 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 23:42:24.786083 kubelet[3367]: I1106 23:42:24.786058 3367 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 23:42:25.645668 kubelet[3367]: I1106 23:42:25.645616 3367 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:42:25.648930 kubelet[3367]: E1106 23:42:25.648873 3367 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 6 23:42:25.649097 kubelet[3367]: I1106 23:42:25.648939 3367 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 6 23:42:25.653313 kubelet[3367]: I1106 23:42:25.653268 3367 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 6 23:42:25.653537 kubelet[3367]: I1106 23:42:25.653514 3367 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 23:42:25.653920 kubelet[3367]: I1106 23:42:25.653537 3367 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.4-n-c920fca088","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 23:42:25.653920 kubelet[3367]: I1106 23:42:25.653863 3367 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 23:42:25.653920 kubelet[3367]: I1106 23:42:25.653886 3367 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 23:42:25.653920 kubelet[3367]: I1106 23:42:25.653915 3367 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 6 23:42:25.656213 kubelet[3367]: I1106 23:42:25.655519 3367 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:42:25.656213 kubelet[3367]: I1106 23:42:25.655690 3367 kubelet.go:475] "Attempting to sync node with API server" Nov 6 23:42:25.656213 kubelet[3367]: I1106 23:42:25.655703 3367 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 23:42:25.656213 kubelet[3367]: I1106 23:42:25.655732 3367 kubelet.go:387] "Adding apiserver pod source" Nov 6 23:42:25.656213 kubelet[3367]: I1106 23:42:25.655755 3367 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 23:42:25.658373 kubelet[3367]: I1106 23:42:25.658353 3367 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 6 23:42:25.659934 kubelet[3367]: I1106 23:42:25.659889 3367 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 23:42:25.660109 kubelet[3367]: I1106 23:42:25.660080 3367 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 23:42:25.668442 
kubelet[3367]: I1106 23:42:25.668429 3367 server.go:1262] "Started kubelet" Nov 6 23:42:25.671757 kubelet[3367]: I1106 23:42:25.671569 3367 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 23:42:25.682625 kubelet[3367]: I1106 23:42:25.682448 3367 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 23:42:25.686442 kubelet[3367]: I1106 23:42:25.686412 3367 server.go:310] "Adding debug handlers to kubelet server" Nov 6 23:42:25.693567 kubelet[3367]: I1106 23:42:25.693531 3367 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 23:42:25.693647 kubelet[3367]: I1106 23:42:25.693588 3367 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 23:42:25.693981 kubelet[3367]: I1106 23:42:25.693752 3367 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 23:42:25.694057 kubelet[3367]: I1106 23:42:25.694008 3367 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 23:42:25.695899 kubelet[3367]: I1106 23:42:25.695787 3367 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 6 23:42:25.696064 kubelet[3367]: E1106 23:42:25.696001 3367 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-c920fca088\" not found" Nov 6 23:42:25.697637 kubelet[3367]: I1106 23:42:25.697531 3367 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 6 23:42:25.697713 kubelet[3367]: I1106 23:42:25.697655 3367 reconciler.go:29] "Reconciler: start to sync state" Nov 6 23:42:25.702745 kubelet[3367]: E1106 23:42:25.701807 3367 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 23:42:25.702745 kubelet[3367]: I1106 23:42:25.702046 3367 factory.go:223] Registration of the systemd container factory successfully Nov 6 23:42:25.702745 kubelet[3367]: I1106 23:42:25.702145 3367 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 23:42:25.708105 kubelet[3367]: I1106 23:42:25.708080 3367 factory.go:223] Registration of the containerd container factory successfully Nov 6 23:42:25.717781 kubelet[3367]: I1106 23:42:25.717750 3367 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 6 23:42:25.721477 kubelet[3367]: I1106 23:42:25.721454 3367 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 6 23:42:25.721477 kubelet[3367]: I1106 23:42:25.721478 3367 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 6 23:42:25.721705 kubelet[3367]: I1106 23:42:25.721502 3367 kubelet.go:2427] "Starting kubelet main sync loop" Nov 6 23:42:25.721705 kubelet[3367]: E1106 23:42:25.721544 3367 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 23:42:25.767325 kubelet[3367]: I1106 23:42:25.767067 3367 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 23:42:25.767325 kubelet[3367]: I1106 23:42:25.767087 3367 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 23:42:25.767325 kubelet[3367]: I1106 23:42:25.767108 3367 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:42:25.767325 kubelet[3367]: I1106 23:42:25.767238 3367 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 23:42:25.767325 kubelet[3367]: I1106 23:42:25.767253 3367 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 23:42:25.767325 kubelet[3367]: I1106 23:42:25.767274 3367 policy_none.go:49] "None policy: Start" Nov 6 23:42:25.767325 kubelet[3367]: I1106 23:42:25.767287 3367 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 6 23:42:25.767686 kubelet[3367]: I1106 23:42:25.767368 3367 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 6 23:42:25.767686 kubelet[3367]: I1106 23:42:25.767488 3367 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 6 23:42:25.767686 kubelet[3367]: I1106 23:42:25.767499 3367 policy_none.go:47] "Start" Nov 6 23:42:25.772057 kubelet[3367]: E1106 23:42:25.772033 3367 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 23:42:25.772400 kubelet[3367]: I1106 23:42:25.772200 3367 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 23:42:25.772400 kubelet[3367]: I1106 23:42:25.772220 3367 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 23:42:25.773204 kubelet[3367]: I1106 23:42:25.772878 3367 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 23:42:25.775154 kubelet[3367]: E1106 23:42:25.775132 3367 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 23:42:25.794868 sudo[3403]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 6 23:42:25.795247 sudo[3403]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 6 23:42:25.823977 kubelet[3367]: I1106 23:42:25.822729 3367 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.4-n-c920fca088" Nov 6 23:42:25.823977 kubelet[3367]: I1106 23:42:25.823055 3367 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.4-n-c920fca088" Nov 6 23:42:25.823977 kubelet[3367]: I1106 23:42:25.823154 3367 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-c920fca088" Nov 6 23:42:25.833387 kubelet[3367]: I1106 23:42:25.833358 3367 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 23:42:25.840734 kubelet[3367]: I1106 23:42:25.840334 3367 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 23:42:25.841182 kubelet[3367]: I1106 23:42:25.840151 3367 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 23:42:25.841182 kubelet[3367]: E1106 23:42:25.841025 3367 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.4-n-c920fca088\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.4-n-c920fca088" Nov 6 23:42:25.875247 kubelet[3367]: I1106 23:42:25.875217 3367 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:25.887952 kubelet[3367]: I1106 23:42:25.887919 3367 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:25.888097 kubelet[3367]: I1106 23:42:25.888016 3367 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.4-n-c920fca088" Nov 6 23:42:25.898588 kubelet[3367]: I1106 23:42:25.898486 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e66ed0cd45e88c9bd5be278cf6ed0c6-ca-certs\") pod \"kube-apiserver-ci-4230.2.4-n-c920fca088\" (UID: \"1e66ed0cd45e88c9bd5be278cf6ed0c6\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-c920fca088" Nov 6 23:42:25.898588 kubelet[3367]: I1106 23:42:25.898525 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e66ed0cd45e88c9bd5be278cf6ed0c6-k8s-certs\") pod \"kube-apiserver-ci-4230.2.4-n-c920fca088\" (UID: \"1e66ed0cd45e88c9bd5be278cf6ed0c6\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-c920fca088" Nov 6 23:42:25.898588 kubelet[3367]: I1106 23:42:25.898548 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e66ed0cd45e88c9bd5be278cf6ed0c6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.4-n-c920fca088\" (UID: \"1e66ed0cd45e88c9bd5be278cf6ed0c6\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-c920fca088" Nov 6 23:42:26.000018 kubelet[3367]: I1106 
23:42:25.999327 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a05a3f9ae5fbf8a271b913ff150e3e53-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-c920fca088\" (UID: \"a05a3f9ae5fbf8a271b913ff150e3e53\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-c920fca088" Nov 6 23:42:26.000018 kubelet[3367]: I1106 23:42:25.999383 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a05a3f9ae5fbf8a271b913ff150e3e53-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.4-n-c920fca088\" (UID: \"a05a3f9ae5fbf8a271b913ff150e3e53\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-c920fca088" Nov 6 23:42:26.000018 kubelet[3367]: I1106 23:42:25.999445 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a05a3f9ae5fbf8a271b913ff150e3e53-ca-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-c920fca088\" (UID: \"a05a3f9ae5fbf8a271b913ff150e3e53\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-c920fca088" Nov 6 23:42:26.000018 kubelet[3367]: I1106 23:42:25.999468 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a05a3f9ae5fbf8a271b913ff150e3e53-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.4-n-c920fca088\" (UID: \"a05a3f9ae5fbf8a271b913ff150e3e53\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-c920fca088" Nov 6 23:42:26.000018 kubelet[3367]: I1106 23:42:25.999490 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a05a3f9ae5fbf8a271b913ff150e3e53-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.4-n-c920fca088\" (UID: \"a05a3f9ae5fbf8a271b913ff150e3e53\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-c920fca088" Nov 6 23:42:26.000373 kubelet[3367]: I1106 23:42:25.999530 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/662b1b2a78d8a3fbd1ec6444f6c30966-kubeconfig\") pod \"kube-scheduler-ci-4230.2.4-n-c920fca088\" (UID: \"662b1b2a78d8a3fbd1ec6444f6c30966\") " pod="kube-system/kube-scheduler-ci-4230.2.4-n-c920fca088" Nov 6 23:42:26.337024 sudo[3403]: pam_unix(sudo:session): session closed for user root Nov 6 23:42:26.662332 kubelet[3367]: I1106 23:42:26.662152 3367 apiserver.go:52] "Watching apiserver" Nov 6 23:42:26.698341 kubelet[3367]: I1106 23:42:26.697635 3367 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 6 23:42:26.801230 kubelet[3367]: I1106 23:42:26.800652 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.4-n-c920fca088" podStartSLOduration=1.8006313459999999 podStartE2EDuration="1.800631346s" podCreationTimestamp="2025-11-06 23:42:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:42:26.782484898 +0000 UTC m=+2.044206684" watchObservedRunningTime="2025-11-06 23:42:26.800631346 +0000 UTC m=+2.062353032" Nov 6 23:42:26.832807 kubelet[3367]: I1106 23:42:26.832129 3367 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.4-n-c920fca088" podStartSLOduration=3.832083103 podStartE2EDuration="3.832083103s" podCreationTimestamp="2025-11-06 23:42:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:42:26.803021166 +0000 UTC m=+2.064742952" watchObservedRunningTime="2025-11-06 23:42:26.832083103 +0000 UTC m=+2.093804789" Nov 6 23:42:26.851691 kubelet[3367]: I1106 23:42:26.851198 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-c920fca088" podStartSLOduration=1.851173859 podStartE2EDuration="1.851173859s" podCreationTimestamp="2025-11-06 23:42:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:42:26.833692416 +0000 UTC m=+2.095414202" watchObservedRunningTime="2025-11-06 23:42:26.851173859 +0000 UTC m=+2.112895545" Nov 6 23:42:28.109813 sudo[2417]: pam_unix(sudo:session): session closed for user root Nov 6 23:42:28.212339 sshd[2416]: Connection closed by 10.200.16.10 port 49150 Nov 6 23:42:28.212275 sshd-session[2414]: pam_unix(sshd:session): session closed for user core Nov 6 23:42:28.215679 systemd[1]: sshd@6-10.200.8.12:22-10.200.16.10:49150.service: Deactivated successfully. Nov 6 23:42:28.218066 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 23:42:28.218293 systemd[1]: session-9.scope: Consumed 6.634s CPU time, 265.6M memory peak. Nov 6 23:42:28.221063 systemd-logind[1729]: Session 9 logged out. Waiting for processes to exit. Nov 6 23:42:28.222072 systemd-logind[1729]: Removed session 9. Nov 6 23:42:30.036169 kubelet[3367]: I1106 23:42:30.036125 3367 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 23:42:30.036802 kubelet[3367]: I1106 23:42:30.036693 3367 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 23:42:30.036904 containerd[1753]: time="2025-11-06T23:42:30.036500156Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 23:42:31.029058 systemd[1]: Created slice kubepods-besteffort-poda2ee161c_8be8_4361_8a01_213ddc7106c7.slice - libcontainer container kubepods-besteffort-poda2ee161c_8be8_4361_8a01_213ddc7106c7.slice. 
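Each of the kubelet's "Observed pod startup duration" entries above carries the measured value as a podStartSLOduration key=value field. A small sketch that collects those per pod, assuming Python 3 and the klog key=value layout exactly as shown; startup_durations is an illustrative helper name:

import re

# Matches kubelet messages of the form:
#   "Observed pod startup duration" pod="kube-system/<name>" podStartSLOduration=1.80063...
STARTUP_RE = re.compile(r'pod="(?P<pod>[^"]+)".*?podStartSLOduration=(?P<slo>[\d.]+)')

def startup_durations(lines):
    """Return {pod: seconds} for every 'Observed pod startup duration' entry."""
    out = {}
    for line in lines:
        if "Observed pod startup duration" in line:
            m = STARTUP_RE.search(line)
            if m:
                out[m.group("pod")] = float(m.group("slo"))
    return out

Run over the journal above, this maps kube-system/kube-scheduler-ci-4230.2.4-n-c920fca088 to roughly 1.8 and kube-system/kube-apiserver-ci-4230.2.4-n-c920fca088 to roughly 3.8.
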
Nov 6 23:42:31.034436 kubelet[3367]: I1106 23:42:31.032533 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a2ee161c-8be8-4361-8a01-213ddc7106c7-kube-proxy\") pod \"kube-proxy-htdh4\" (UID: \"a2ee161c-8be8-4361-8a01-213ddc7106c7\") " pod="kube-system/kube-proxy-htdh4" Nov 6 23:42:31.034436 kubelet[3367]: I1106 23:42:31.032575 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4hms\" (UniqueName: \"kubernetes.io/projected/a2ee161c-8be8-4361-8a01-213ddc7106c7-kube-api-access-g4hms\") pod \"kube-proxy-htdh4\" (UID: \"a2ee161c-8be8-4361-8a01-213ddc7106c7\") " pod="kube-system/kube-proxy-htdh4" Nov 6 23:42:31.034436 kubelet[3367]: I1106 23:42:31.032603 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2ee161c-8be8-4361-8a01-213ddc7106c7-xtables-lock\") pod \"kube-proxy-htdh4\" (UID: \"a2ee161c-8be8-4361-8a01-213ddc7106c7\") " pod="kube-system/kube-proxy-htdh4" Nov 6 23:42:31.034436 kubelet[3367]: I1106 23:42:31.032622 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2ee161c-8be8-4361-8a01-213ddc7106c7-lib-modules\") pod \"kube-proxy-htdh4\" (UID: \"a2ee161c-8be8-4361-8a01-213ddc7106c7\") " pod="kube-system/kube-proxy-htdh4" Nov 6 23:42:31.049478 systemd[1]: Created slice kubepods-burstable-pod4fa8983e_5240_4db8_ae67_f06b36071332.slice - libcontainer container kubepods-burstable-pod4fa8983e_5240_4db8_ae67_f06b36071332.slice. Nov 6 23:42:31.133454 kubelet[3367]: I1106 23:42:31.133395 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-cni-path\") pod \"cilium-xh7pl\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " pod="kube-system/cilium-xh7pl" Nov 6 23:42:31.133454 kubelet[3367]: I1106 23:42:31.133447 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4fa8983e-5240-4db8-ae67-f06b36071332-clustermesh-secrets\") pod \"cilium-xh7pl\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " pod="kube-system/cilium-xh7pl" Nov 6 23:42:31.133970 kubelet[3367]: I1106 23:42:31.133468 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-host-proc-sys-net\") pod \"cilium-xh7pl\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " pod="kube-system/cilium-xh7pl" Nov 6 23:42:31.133970 kubelet[3367]: I1106 23:42:31.133488 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-host-proc-sys-kernel\") pod \"cilium-xh7pl\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " pod="kube-system/cilium-xh7pl" Nov 6 23:42:31.133970 kubelet[3367]: I1106 23:42:31.133519 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-bpf-maps\") pod \"cilium-xh7pl\" (UID: 
\"4fa8983e-5240-4db8-ae67-f06b36071332\") " pod="kube-system/cilium-xh7pl" Nov 6 23:42:31.133970 kubelet[3367]: I1106 23:42:31.133537 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-lib-modules\") pod \"cilium-xh7pl\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " pod="kube-system/cilium-xh7pl" Nov 6 23:42:31.133970 kubelet[3367]: I1106 23:42:31.133555 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-xtables-lock\") pod \"cilium-xh7pl\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " pod="kube-system/cilium-xh7pl" Nov 6 23:42:31.133970 kubelet[3367]: I1106 23:42:31.133576 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fa8983e-5240-4db8-ae67-f06b36071332-cilium-config-path\") pod \"cilium-xh7pl\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " pod="kube-system/cilium-xh7pl" Nov 6 23:42:31.134221 kubelet[3367]: I1106 23:42:31.133603 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-cilium-run\") pod \"cilium-xh7pl\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " pod="kube-system/cilium-xh7pl" Nov 6 23:42:31.134221 kubelet[3367]: I1106 23:42:31.133626 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-hostproc\") pod \"cilium-xh7pl\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " pod="kube-system/cilium-xh7pl" Nov 6 23:42:31.134221 kubelet[3367]: I1106 23:42:31.133644 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-cilium-cgroup\") pod \"cilium-xh7pl\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " pod="kube-system/cilium-xh7pl" Nov 6 23:42:31.134221 kubelet[3367]: I1106 23:42:31.133663 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4fa8983e-5240-4db8-ae67-f06b36071332-hubble-tls\") pod \"cilium-xh7pl\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " pod="kube-system/cilium-xh7pl" Nov 6 23:42:31.134221 kubelet[3367]: I1106 23:42:31.133695 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-etc-cni-netd\") pod \"cilium-xh7pl\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " pod="kube-system/cilium-xh7pl" Nov 6 23:42:31.134221 kubelet[3367]: I1106 23:42:31.133714 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2t24\" (UniqueName: \"kubernetes.io/projected/4fa8983e-5240-4db8-ae67-f06b36071332-kube-api-access-p2t24\") pod \"cilium-xh7pl\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " pod="kube-system/cilium-xh7pl" Nov 6 23:42:31.224227 systemd[1]: Created slice kubepods-besteffort-poda67a777f_7cf7_4b73_b036_1c1df94639f9.slice - libcontainer 
container kubepods-besteffort-poda67a777f_7cf7_4b73_b036_1c1df94639f9.slice. Nov 6 23:42:31.233946 kubelet[3367]: I1106 23:42:31.233904 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a67a777f-7cf7-4b73-b036-1c1df94639f9-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-bxr6b\" (UID: \"a67a777f-7cf7-4b73-b036-1c1df94639f9\") " pod="kube-system/cilium-operator-6f9c7c5859-bxr6b" Nov 6 23:42:31.234108 kubelet[3367]: I1106 23:42:31.234079 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt6qc\" (UniqueName: \"kubernetes.io/projected/a67a777f-7cf7-4b73-b036-1c1df94639f9-kube-api-access-xt6qc\") pod \"cilium-operator-6f9c7c5859-bxr6b\" (UID: \"a67a777f-7cf7-4b73-b036-1c1df94639f9\") " pod="kube-system/cilium-operator-6f9c7c5859-bxr6b" Nov 6 23:42:31.350500 containerd[1753]: time="2025-11-06T23:42:31.350449879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-htdh4,Uid:a2ee161c-8be8-4361-8a01-213ddc7106c7,Namespace:kube-system,Attempt:0,}" Nov 6 23:42:31.361226 containerd[1753]: time="2025-11-06T23:42:31.361178367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xh7pl,Uid:4fa8983e-5240-4db8-ae67-f06b36071332,Namespace:kube-system,Attempt:0,}" Nov 6 23:42:31.398619 containerd[1753]: time="2025-11-06T23:42:31.398527172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:42:31.399190 containerd[1753]: time="2025-11-06T23:42:31.399042176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:42:31.401362 containerd[1753]: time="2025-11-06T23:42:31.400370787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:31.401362 containerd[1753]: time="2025-11-06T23:42:31.401205993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:31.412566 containerd[1753]: time="2025-11-06T23:42:31.412126483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:42:31.412566 containerd[1753]: time="2025-11-06T23:42:31.412193183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:42:31.412566 containerd[1753]: time="2025-11-06T23:42:31.412214283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:31.412566 containerd[1753]: time="2025-11-06T23:42:31.412312284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:31.427495 systemd[1]: Started cri-containerd-be7578ac4a13c6a4bd125d6a8d066689addbc2ef507d40829e3a975d4c4bac9a.scope - libcontainer container be7578ac4a13c6a4bd125d6a8d066689addbc2ef507d40829e3a975d4c4bac9a. Nov 6 23:42:31.442460 systemd[1]: Started cri-containerd-5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3.scope - libcontainer container 5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3. 
Nov 6 23:42:31.471155 containerd[1753]: time="2025-11-06T23:42:31.470223057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-htdh4,Uid:a2ee161c-8be8-4361-8a01-213ddc7106c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"be7578ac4a13c6a4bd125d6a8d066689addbc2ef507d40829e3a975d4c4bac9a\"" Nov 6 23:42:31.478525 containerd[1753]: time="2025-11-06T23:42:31.478482424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xh7pl,Uid:4fa8983e-5240-4db8-ae67-f06b36071332,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3\"" Nov 6 23:42:31.482496 containerd[1753]: time="2025-11-06T23:42:31.482443656Z" level=info msg="CreateContainer within sandbox \"be7578ac4a13c6a4bd125d6a8d066689addbc2ef507d40829e3a975d4c4bac9a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 23:42:31.483178 containerd[1753]: time="2025-11-06T23:42:31.482905060Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 6 23:42:31.523673 containerd[1753]: time="2025-11-06T23:42:31.523620893Z" level=info msg="CreateContainer within sandbox \"be7578ac4a13c6a4bd125d6a8d066689addbc2ef507d40829e3a975d4c4bac9a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"14a81348eafa6b03021182c3ea2924a5a08e77b6336eb306954fba07045ca737\"" Nov 6 23:42:31.524438 containerd[1753]: time="2025-11-06T23:42:31.524391899Z" level=info msg="StartContainer for \"14a81348eafa6b03021182c3ea2924a5a08e77b6336eb306954fba07045ca737\"" Nov 6 23:42:31.535086 containerd[1753]: time="2025-11-06T23:42:31.534950885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-bxr6b,Uid:a67a777f-7cf7-4b73-b036-1c1df94639f9,Namespace:kube-system,Attempt:0,}" Nov 6 23:42:31.557510 systemd[1]: Started cri-containerd-14a81348eafa6b03021182c3ea2924a5a08e77b6336eb306954fba07045ca737.scope - libcontainer container 14a81348eafa6b03021182c3ea2924a5a08e77b6336eb306954fba07045ca737. Nov 6 23:42:31.582912 containerd[1753]: time="2025-11-06T23:42:31.581969069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:42:31.582912 containerd[1753]: time="2025-11-06T23:42:31.582818276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:42:31.582912 containerd[1753]: time="2025-11-06T23:42:31.582838876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:31.583518 containerd[1753]: time="2025-11-06T23:42:31.582969277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:31.613585 systemd[1]: Started cri-containerd-a32346ee00b9ff24544a5a58470e4f4a0d7f9eac9b4f86dacfa8af22542df2c7.scope - libcontainer container a32346ee00b9ff24544a5a58470e4f4a0d7f9eac9b4f86dacfa8af22542df2c7. 
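The "returns sandbox id" entries above tie each pod to the 64-character sandbox id that the cri-containerd-<id>.scope units are named after. A sketch for recovering that mapping from the journal, assuming Python 3 and the message format exactly as logged; SANDBOX_RE and sandboxes are illustrative names:

import re

# Matches containerd messages of the form:
#   RunPodSandbox for &PodSandboxMetadata{Name:<pod>,Uid:<uid>,Namespace:<ns>,...} returns sandbox id \"<64-hex>\"
SANDBOX_RE = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<pod>[^,]+),Uid:(?P<uid>[^,]+),Namespace:(?P<ns>[^,]+),'
    r'.*returns sandbox id \\?"(?P<sid>[0-9a-f]{64})\\?"'
)

def sandboxes(lines):
    """Return {(namespace, pod name): sandbox id} from 'returns sandbox id' entries."""
    out = {}
    for line in lines:
        m = SANDBOX_RE.search(line)
        if m:
            out[(m.group("ns"), m.group("pod"))] = m.group("sid")
    return out

On the entries above this maps ("kube-system", "kube-proxy-htdh4") to the be7578ac... id and ("kube-system", "cilium-xh7pl") to the 5d020cee... id, the same ids systemd reports when it starts the corresponding .scope units.
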
Nov 6 23:42:31.623325 containerd[1753]: time="2025-11-06T23:42:31.622742402Z" level=info msg="StartContainer for \"14a81348eafa6b03021182c3ea2924a5a08e77b6336eb306954fba07045ca737\" returns successfully" Nov 6 23:42:31.676226 containerd[1753]: time="2025-11-06T23:42:31.676186238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-bxr6b,Uid:a67a777f-7cf7-4b73-b036-1c1df94639f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a32346ee00b9ff24544a5a58470e4f4a0d7f9eac9b4f86dacfa8af22542df2c7\"" Nov 6 23:42:32.285460 kubelet[3367]: I1106 23:42:32.285136 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-htdh4" podStartSLOduration=2.285119996 podStartE2EDuration="2.285119996s" podCreationTimestamp="2025-11-06 23:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:42:31.777243862 +0000 UTC m=+7.038965648" watchObservedRunningTime="2025-11-06 23:42:32.285119996 +0000 UTC m=+7.546841782" Nov 6 23:42:36.578992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount681943183.mount: Deactivated successfully. Nov 6 23:42:38.780011 containerd[1753]: time="2025-11-06T23:42:38.779965520Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:38.782325 containerd[1753]: time="2025-11-06T23:42:38.782197838Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 6 23:42:38.784790 containerd[1753]: time="2025-11-06T23:42:38.784740459Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:38.786188 containerd[1753]: time="2025-11-06T23:42:38.786151470Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.301803898s" Nov 6 23:42:38.786280 containerd[1753]: time="2025-11-06T23:42:38.786193371Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 6 23:42:38.788602 containerd[1753]: time="2025-11-06T23:42:38.788568990Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 6 23:42:38.794745 containerd[1753]: time="2025-11-06T23:42:38.794710240Z" level=info msg="CreateContainer within sandbox \"5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 23:42:38.818874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551708045.mount: Deactivated successfully. 
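containerd stamps every entry with an RFC 3339 time= field, so the 7.301803898s it reports for the cilium image pull above can be cross-checked against the gap between the PullImage request (23:42:31.482905060Z) and the Pulled message (23:42:38.786151470Z). A rough sketch of that arithmetic, assuming Python 3; parse_ts is an illustrative helper that simply truncates the nanosecond field to microseconds:

from datetime import datetime

def parse_ts(ts):
    """Parse a containerd time= value like '2025-11-06T23:42:38.786151470Z' (nanoseconds truncated)."""
    base, frac = ts.rstrip("Z").split(".")
    return datetime.strptime(base, "%Y-%m-%dT%H:%M:%S").replace(microsecond=int(frac[:6]))

started = parse_ts("2025-11-06T23:42:31.482905060Z")    # PullImage request for the cilium image
finished = parse_ts("2025-11-06T23:42:38.786151470Z")   # Pulled image ... in 7.301803898s
print((finished - started).total_seconds())             # ~7.303, consistent with the reported 7.301803898s
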
Nov 6 23:42:38.825630 containerd[1753]: time="2025-11-06T23:42:38.825588191Z" level=info msg="CreateContainer within sandbox \"5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3\"" Nov 6 23:42:38.827367 containerd[1753]: time="2025-11-06T23:42:38.826425798Z" level=info msg="StartContainer for \"709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3\"" Nov 6 23:42:38.858397 systemd[1]: run-containerd-runc-k8s.io-709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3-runc.5TijmY.mount: Deactivated successfully. Nov 6 23:42:38.865465 systemd[1]: Started cri-containerd-709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3.scope - libcontainer container 709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3. Nov 6 23:42:38.894747 containerd[1753]: time="2025-11-06T23:42:38.894699153Z" level=info msg="StartContainer for \"709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3\" returns successfully" Nov 6 23:42:38.901876 systemd[1]: cri-containerd-709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3.scope: Deactivated successfully. Nov 6 23:42:39.814706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3-rootfs.mount: Deactivated successfully. Nov 6 23:42:42.599809 containerd[1753]: time="2025-11-06T23:42:42.599725486Z" level=info msg="shim disconnected" id=709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3 namespace=k8s.io Nov 6 23:42:42.599809 containerd[1753]: time="2025-11-06T23:42:42.599794287Z" level=warning msg="cleaning up after shim disconnected" id=709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3 namespace=k8s.io Nov 6 23:42:42.599809 containerd[1753]: time="2025-11-06T23:42:42.599805587Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:42:42.613812 containerd[1753]: time="2025-11-06T23:42:42.613765297Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:42:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:42:42.801578 containerd[1753]: time="2025-11-06T23:42:42.801523883Z" level=info msg="CreateContainer within sandbox \"5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 23:42:42.876384 containerd[1753]: time="2025-11-06T23:42:42.876232374Z" level=info msg="CreateContainer within sandbox \"5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944\"" Nov 6 23:42:42.878553 containerd[1753]: time="2025-11-06T23:42:42.877144181Z" level=info msg="StartContainer for \"5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944\"" Nov 6 23:42:42.909473 systemd[1]: Started cri-containerd-5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944.scope - libcontainer container 5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944. 
Nov 6 23:42:42.939151 containerd[1753]: time="2025-11-06T23:42:42.939107571Z" level=info msg="StartContainer for \"5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944\" returns successfully" Nov 6 23:42:42.948740 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 23:42:42.949364 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:42:42.949820 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:42:42.956132 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:42:42.956966 systemd[1]: cri-containerd-5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944.scope: Deactivated successfully. Nov 6 23:42:42.977230 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:42:43.003487 containerd[1753]: time="2025-11-06T23:42:43.002902976Z" level=info msg="shim disconnected" id=5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944 namespace=k8s.io Nov 6 23:42:43.003487 containerd[1753]: time="2025-11-06T23:42:43.002982076Z" level=warning msg="cleaning up after shim disconnected" id=5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944 namespace=k8s.io Nov 6 23:42:43.003487 containerd[1753]: time="2025-11-06T23:42:43.002994476Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:42:43.024660 containerd[1753]: time="2025-11-06T23:42:43.024610947Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:42:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:42:43.809374 containerd[1753]: time="2025-11-06T23:42:43.809326355Z" level=info msg="CreateContainer within sandbox \"5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 23:42:43.861622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944-rootfs.mount: Deactivated successfully. Nov 6 23:42:43.874332 containerd[1753]: time="2025-11-06T23:42:43.874237068Z" level=info msg="CreateContainer within sandbox \"5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a\"" Nov 6 23:42:43.875030 containerd[1753]: time="2025-11-06T23:42:43.874903274Z" level=info msg="StartContainer for \"ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a\"" Nov 6 23:42:43.911789 systemd[1]: run-containerd-runc-k8s.io-ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a-runc.fFvppS.mount: Deactivated successfully. Nov 6 23:42:43.920652 systemd[1]: Started cri-containerd-ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a.scope - libcontainer container ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a. 
Nov 6 23:42:43.953204 containerd[1753]: time="2025-11-06T23:42:43.951096776Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:43.953939 containerd[1753]: time="2025-11-06T23:42:43.953809198Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 6 23:42:43.957136 systemd[1]: cri-containerd-ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a.scope: Deactivated successfully. Nov 6 23:42:43.959453 containerd[1753]: time="2025-11-06T23:42:43.959422042Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:42:43.961168 containerd[1753]: time="2025-11-06T23:42:43.960776853Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.172165163s" Nov 6 23:42:43.961168 containerd[1753]: time="2025-11-06T23:42:43.960815253Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 6 23:42:43.961168 containerd[1753]: time="2025-11-06T23:42:43.961078755Z" level=info msg="StartContainer for \"ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a\" returns successfully" Nov 6 23:42:43.972017 containerd[1753]: time="2025-11-06T23:42:43.971977741Z" level=info msg="CreateContainer within sandbox \"a32346ee00b9ff24544a5a58470e4f4a0d7f9eac9b4f86dacfa8af22542df2c7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 6 23:42:43.992464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a-rootfs.mount: Deactivated successfully. 
Nov 6 23:42:44.495883 containerd[1753]: time="2025-11-06T23:42:44.495751085Z" level=info msg="CreateContainer within sandbox \"a32346ee00b9ff24544a5a58470e4f4a0d7f9eac9b4f86dacfa8af22542df2c7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c\"" Nov 6 23:42:44.496361 containerd[1753]: time="2025-11-06T23:42:44.496263589Z" level=info msg="shim disconnected" id=ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a namespace=k8s.io Nov 6 23:42:44.496361 containerd[1753]: time="2025-11-06T23:42:44.496339589Z" level=warning msg="cleaning up after shim disconnected" id=ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a namespace=k8s.io Nov 6 23:42:44.496361 containerd[1753]: time="2025-11-06T23:42:44.496352389Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:42:44.498348 containerd[1753]: time="2025-11-06T23:42:44.496963394Z" level=info msg="StartContainer for \"397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c\"" Nov 6 23:42:44.529470 systemd[1]: Started cri-containerd-397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c.scope - libcontainer container 397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c. Nov 6 23:42:44.563997 containerd[1753]: time="2025-11-06T23:42:44.563919424Z" level=info msg="StartContainer for \"397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c\" returns successfully" Nov 6 23:42:44.820079 containerd[1753]: time="2025-11-06T23:42:44.819984249Z" level=info msg="CreateContainer within sandbox \"5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 23:42:44.852355 containerd[1753]: time="2025-11-06T23:42:44.852271405Z" level=info msg="CreateContainer within sandbox \"5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c\"" Nov 6 23:42:44.854331 containerd[1753]: time="2025-11-06T23:42:44.853122412Z" level=info msg="StartContainer for \"e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c\"" Nov 6 23:42:44.874220 kubelet[3367]: I1106 23:42:44.874157 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-bxr6b" podStartSLOduration=1.5870879439999999 podStartE2EDuration="13.874135878s" podCreationTimestamp="2025-11-06 23:42:31 +0000 UTC" firstStartedPulling="2025-11-06 23:42:31.677968052 +0000 UTC m=+6.939689838" lastFinishedPulling="2025-11-06 23:42:43.965016086 +0000 UTC m=+19.226737772" observedRunningTime="2025-11-06 23:42:44.835436772 +0000 UTC m=+20.097158458" watchObservedRunningTime="2025-11-06 23:42:44.874135878 +0000 UTC m=+20.135857664" Nov 6 23:42:44.913436 systemd[1]: run-containerd-runc-k8s.io-e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c-runc.fmzKXc.mount: Deactivated successfully. Nov 6 23:42:44.928233 systemd[1]: Started cri-containerd-e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c.scope - libcontainer container e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c. 
Nov 6 23:42:44.962785 containerd[1753]: time="2025-11-06T23:42:44.962736679Z" level=info msg="StartContainer for \"e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c\" returns successfully" Nov 6 23:42:44.963571 systemd[1]: cri-containerd-e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c.scope: Deactivated successfully. Nov 6 23:42:45.013090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c-rootfs.mount: Deactivated successfully. Nov 6 23:42:45.019031 containerd[1753]: time="2025-11-06T23:42:45.018956623Z" level=info msg="shim disconnected" id=e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c namespace=k8s.io Nov 6 23:42:45.019031 containerd[1753]: time="2025-11-06T23:42:45.019031924Z" level=warning msg="cleaning up after shim disconnected" id=e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c namespace=k8s.io Nov 6 23:42:45.019244 containerd[1753]: time="2025-11-06T23:42:45.019043324Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:42:45.050037 containerd[1753]: time="2025-11-06T23:42:45.049961069Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:42:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:42:45.824705 containerd[1753]: time="2025-11-06T23:42:45.824661597Z" level=info msg="CreateContainer within sandbox \"5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 23:42:45.863782 containerd[1753]: time="2025-11-06T23:42:45.863738506Z" level=info msg="CreateContainer within sandbox \"5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a\"" Nov 6 23:42:45.864353 containerd[1753]: time="2025-11-06T23:42:45.864274510Z" level=info msg="StartContainer for \"060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a\"" Nov 6 23:42:45.904510 systemd[1]: Started cri-containerd-060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a.scope - libcontainer container 060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a. Nov 6 23:42:45.937260 containerd[1753]: time="2025-11-06T23:42:45.937204287Z" level=info msg="StartContainer for \"060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a\" returns successfully" Nov 6 23:42:46.136406 kubelet[3367]: I1106 23:42:46.135650 3367 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 6 23:42:46.207606 systemd[1]: Created slice kubepods-burstable-pod9b6f1534_aa1b_48be_9eab_cbdc8c66f6bf.slice - libcontainer container kubepods-burstable-pod9b6f1534_aa1b_48be_9eab_cbdc8c66f6bf.slice. Nov 6 23:42:46.224544 systemd[1]: Created slice kubepods-burstable-pod9b46dcd5_06c8_4d51_b844_0392630da60f.slice - libcontainer container kubepods-burstable-pod9b46dcd5_06c8_4d51_b844_0392630da60f.slice. 
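The shim-disconnected and cleanup entries above correspond to task lifecycle events published by containerd (topics such as /tasks/exit and /tasks/delete). A sketch that watches those events, assuming the github.com/containerd/containerd Go client:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Stream every event in the k8s.io namespace; task exits and deletes are
	// what precede the shim cleanup messages in the journal above.
	envelopes, errs := client.Subscribe(ctx)
	for {
		select {
		case e := <-envelopes:
			fmt.Println(e.Timestamp.Format("15:04:05.000000"), e.Namespace, e.Topic)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}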
Nov 6 23:42:46.239448 kubelet[3367]: I1106 23:42:46.239222 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b46dcd5-06c8-4d51-b844-0392630da60f-config-volume\") pod \"coredns-66bc5c9577-dz5cb\" (UID: \"9b46dcd5-06c8-4d51-b844-0392630da60f\") " pod="kube-system/coredns-66bc5c9577-dz5cb" Nov 6 23:42:46.239448 kubelet[3367]: I1106 23:42:46.239280 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzgmw\" (UniqueName: \"kubernetes.io/projected/9b46dcd5-06c8-4d51-b844-0392630da60f-kube-api-access-bzgmw\") pod \"coredns-66bc5c9577-dz5cb\" (UID: \"9b46dcd5-06c8-4d51-b844-0392630da60f\") " pod="kube-system/coredns-66bc5c9577-dz5cb" Nov 6 23:42:46.239448 kubelet[3367]: I1106 23:42:46.239377 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b6f1534-aa1b-48be-9eab-cbdc8c66f6bf-config-volume\") pod \"coredns-66bc5c9577-8hvfq\" (UID: \"9b6f1534-aa1b-48be-9eab-cbdc8c66f6bf\") " pod="kube-system/coredns-66bc5c9577-8hvfq" Nov 6 23:42:46.239448 kubelet[3367]: I1106 23:42:46.239404 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z4cm\" (UniqueName: \"kubernetes.io/projected/9b6f1534-aa1b-48be-9eab-cbdc8c66f6bf-kube-api-access-8z4cm\") pod \"coredns-66bc5c9577-8hvfq\" (UID: \"9b6f1534-aa1b-48be-9eab-cbdc8c66f6bf\") " pod="kube-system/coredns-66bc5c9577-8hvfq" Nov 6 23:42:46.524334 containerd[1753]: time="2025-11-06T23:42:46.524211731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8hvfq,Uid:9b6f1534-aa1b-48be-9eab-cbdc8c66f6bf,Namespace:kube-system,Attempt:0,}" Nov 6 23:42:46.540565 containerd[1753]: time="2025-11-06T23:42:46.540513160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dz5cb,Uid:9b46dcd5-06c8-4d51-b844-0392630da60f,Namespace:kube-system,Attempt:0,}" Nov 6 23:42:46.870118 systemd[1]: run-containerd-runc-k8s.io-060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a-runc.6UliLh.mount: Deactivated successfully. 
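The reconciler entries above show the two volumes each coredns pod mounts: a ConfigMap-backed config-volume and an auto-injected kube-api-access-* projected service-account token. A sketch of the config-volume declaration using the k8s.io/api/core/v1 types; the ConfigMap name "coredns", the Corefile item, and the mount path are the usual defaults and are assumptions here:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The config-volume seen in the VerifyControllerAttachedVolume lines above.
	// The kube-api-access-* projected volume is injected automatically by
	// kubelet and is not declared in the pod spec.
	vol := corev1.Volume{
		Name: "config-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "coredns"},
				Items: []corev1.KeyToPath{
					{Key: "Corefile", Path: "Corefile"},
				},
			},
		},
	}
	mount := corev1.VolumeMount{Name: "config-volume", MountPath: "/etc/coredns", ReadOnly: true}
	fmt.Printf("%+v\n%+v\n", vol, mount)
}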
Nov 6 23:42:48.276773 systemd-networkd[1344]: cilium_host: Link UP Nov 6 23:42:48.276951 systemd-networkd[1344]: cilium_net: Link UP Nov 6 23:42:48.276955 systemd-networkd[1344]: cilium_net: Gained carrier Nov 6 23:42:48.277191 systemd-networkd[1344]: cilium_host: Gained carrier Nov 6 23:42:48.502202 systemd-networkd[1344]: cilium_vxlan: Link UP Nov 6 23:42:48.502218 systemd-networkd[1344]: cilium_vxlan: Gained carrier Nov 6 23:42:48.746962 systemd-networkd[1344]: cilium_host: Gained IPv6LL Nov 6 23:42:48.810363 kernel: NET: Registered PF_ALG protocol family Nov 6 23:42:49.170586 systemd-networkd[1344]: cilium_net: Gained IPv6LL Nov 6 23:42:49.578021 systemd-networkd[1344]: lxc_health: Link UP Nov 6 23:42:49.580446 systemd-networkd[1344]: lxc_health: Gained carrier Nov 6 23:42:49.874571 systemd-networkd[1344]: cilium_vxlan: Gained IPv6LL Nov 6 23:42:50.122601 kernel: eth0: renamed from tmpcf0dc Nov 6 23:42:50.129160 systemd-networkd[1344]: lxcb0ff1ca9fd93: Link UP Nov 6 23:42:50.139773 systemd-networkd[1344]: lxcb0ff1ca9fd93: Gained carrier Nov 6 23:42:50.166153 systemd-networkd[1344]: lxc40065ed351ee: Link UP Nov 6 23:42:50.166348 kernel: eth0: renamed from tmp5df66 Nov 6 23:42:50.178086 systemd-networkd[1344]: lxc40065ed351ee: Gained carrier Nov 6 23:42:51.392329 kubelet[3367]: I1106 23:42:51.391341 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xh7pl" podStartSLOduration=13.086629022 podStartE2EDuration="20.391318844s" podCreationTimestamp="2025-11-06 23:42:31 +0000 UTC" firstStartedPulling="2025-11-06 23:42:31.482541557 +0000 UTC m=+6.744263243" lastFinishedPulling="2025-11-06 23:42:38.787231279 +0000 UTC m=+14.048953065" observedRunningTime="2025-11-06 23:42:46.836682902 +0000 UTC m=+22.098404688" watchObservedRunningTime="2025-11-06 23:42:51.391318844 +0000 UTC m=+26.653040630" Nov 6 23:42:51.538590 systemd-networkd[1344]: lxc_health: Gained IPv6LL Nov 6 23:42:51.602685 systemd-networkd[1344]: lxcb0ff1ca9fd93: Gained IPv6LL Nov 6 23:42:52.178586 systemd-networkd[1344]: lxc40065ed351ee: Gained IPv6LL Nov 6 23:42:53.840336 containerd[1753]: time="2025-11-06T23:42:53.838096278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:42:53.840336 containerd[1753]: time="2025-11-06T23:42:53.838194079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:42:53.840336 containerd[1753]: time="2025-11-06T23:42:53.838215079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:53.840336 containerd[1753]: time="2025-11-06T23:42:53.838348580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:53.872867 systemd[1]: Started cri-containerd-cf0dcd55029aaa06dac2256044f6f5ee20919305258de5788044b92f09306c20.scope - libcontainer container cf0dcd55029aaa06dac2256044f6f5ee20919305258de5788044b92f09306c20. Nov 6 23:42:53.874363 containerd[1753]: time="2025-11-06T23:42:53.872358550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:42:53.874363 containerd[1753]: time="2025-11-06T23:42:53.872446951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:42:53.874363 containerd[1753]: time="2025-11-06T23:42:53.872471751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:53.875643 containerd[1753]: time="2025-11-06T23:42:53.875336874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:42:53.915491 systemd[1]: Started cri-containerd-5df66ecb4493707fcde2062e98d85f2b9f9fc9fae9f529c891099acc654b786d.scope - libcontainer container 5df66ecb4493707fcde2062e98d85f2b9f9fc9fae9f529c891099acc654b786d. Nov 6 23:42:53.969801 containerd[1753]: time="2025-11-06T23:42:53.969755724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8hvfq,Uid:9b6f1534-aa1b-48be-9eab-cbdc8c66f6bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf0dcd55029aaa06dac2256044f6f5ee20919305258de5788044b92f09306c20\"" Nov 6 23:42:53.979839 containerd[1753]: time="2025-11-06T23:42:53.979785603Z" level=info msg="CreateContainer within sandbox \"cf0dcd55029aaa06dac2256044f6f5ee20919305258de5788044b92f09306c20\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 23:42:54.019078 containerd[1753]: time="2025-11-06T23:42:54.018553211Z" level=info msg="CreateContainer within sandbox \"cf0dcd55029aaa06dac2256044f6f5ee20919305258de5788044b92f09306c20\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"57b4ef5ccbc93cad88e0465708a4a329a29492fff181b4f30075c2934fadb6f4\"" Nov 6 23:42:54.023475 containerd[1753]: time="2025-11-06T23:42:54.023431550Z" level=info msg="StartContainer for \"57b4ef5ccbc93cad88e0465708a4a329a29492fff181b4f30075c2934fadb6f4\"" Nov 6 23:42:54.031355 containerd[1753]: time="2025-11-06T23:42:54.031135111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dz5cb,Uid:9b46dcd5-06c8-4d51-b844-0392630da60f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5df66ecb4493707fcde2062e98d85f2b9f9fc9fae9f529c891099acc654b786d\"" Nov 6 23:42:54.043541 containerd[1753]: time="2025-11-06T23:42:54.042814704Z" level=info msg="CreateContainer within sandbox \"5df66ecb4493707fcde2062e98d85f2b9f9fc9fae9f529c891099acc654b786d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 23:42:54.082493 systemd[1]: Started cri-containerd-57b4ef5ccbc93cad88e0465708a4a329a29492fff181b4f30075c2934fadb6f4.scope - libcontainer container 57b4ef5ccbc93cad88e0465708a4a329a29492fff181b4f30075c2934fadb6f4. Nov 6 23:42:54.083982 containerd[1753]: time="2025-11-06T23:42:54.083872930Z" level=info msg="CreateContainer within sandbox \"5df66ecb4493707fcde2062e98d85f2b9f9fc9fae9f529c891099acc654b786d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f5eda787031e2e2bca795acbb835aa7629778821f4ddb0fb84fcc59063551c4\"" Nov 6 23:42:54.084813 containerd[1753]: time="2025-11-06T23:42:54.084777437Z" level=info msg="StartContainer for \"4f5eda787031e2e2bca795acbb835aa7629778821f4ddb0fb84fcc59063551c4\"" Nov 6 23:42:54.129549 systemd[1]: Started cri-containerd-4f5eda787031e2e2bca795acbb835aa7629778821f4ddb0fb84fcc59063551c4.scope - libcontainer container 4f5eda787031e2e2bca795acbb835aa7629778821f4ddb0fb84fcc59063551c4. 
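The cilium_host, cilium_net, cilium_vxlan, lxc_health and lxc* links whose carrier and IPv6LL transitions systemd-networkd reports above can be listed programmatically. A sketch using github.com/vishvananda/netlink, chosen purely for illustration (the log does not show this library in use); Linux-only:

package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList()
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range links {
		a := l.Attrs()
		// Only the Cilium-managed interfaces seen in the journal above.
		if strings.HasPrefix(a.Name, "cilium_") || strings.HasPrefix(a.Name, "lxc") {
			fmt.Printf("%-20s type=%-8s state=%v\n", a.Name, l.Type(), a.OperState)
		}
	}
}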
Nov 6 23:42:54.131960 containerd[1753]: time="2025-11-06T23:42:54.131858711Z" level=info msg="StartContainer for \"57b4ef5ccbc93cad88e0465708a4a329a29492fff181b4f30075c2934fadb6f4\" returns successfully" Nov 6 23:42:54.171916 containerd[1753]: time="2025-11-06T23:42:54.171876029Z" level=info msg="StartContainer for \"4f5eda787031e2e2bca795acbb835aa7629778821f4ddb0fb84fcc59063551c4\" returns successfully" Nov 6 23:42:54.864100 kubelet[3367]: I1106 23:42:54.862909 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dz5cb" podStartSLOduration=23.862892318 podStartE2EDuration="23.862892318s" podCreationTimestamp="2025-11-06 23:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:42:54.861648308 +0000 UTC m=+30.123369994" watchObservedRunningTime="2025-11-06 23:42:54.862892318 +0000 UTC m=+30.124614104" Nov 6 23:42:54.884570 kubelet[3367]: I1106 23:42:54.884504 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8hvfq" podStartSLOduration=23.884484089 podStartE2EDuration="23.884484089s" podCreationTimestamp="2025-11-06 23:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:42:54.882603174 +0000 UTC m=+30.144324860" watchObservedRunningTime="2025-11-06 23:42:54.884484089 +0000 UTC m=+30.146205775" Nov 6 23:43:58.870999 systemd[1]: Started sshd@7-10.200.8.12:22-10.200.16.10:56612.service - OpenSSH per-connection server daemon (10.200.16.10:56612). Nov 6 23:43:59.497215 sshd[4760]: Accepted publickey for core from 10.200.16.10 port 56612 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:43:59.498723 sshd-session[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:43:59.503950 systemd-logind[1729]: New session 10 of user core. Nov 6 23:43:59.512456 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 23:44:00.014145 sshd[4762]: Connection closed by 10.200.16.10 port 56612 Nov 6 23:44:00.015937 sshd-session[4760]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:00.019749 systemd-logind[1729]: Session 10 logged out. Waiting for processes to exit. Nov 6 23:44:00.020695 systemd[1]: sshd@7-10.200.8.12:22-10.200.16.10:56612.service: Deactivated successfully. Nov 6 23:44:00.023074 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 23:44:00.024445 systemd-logind[1729]: Removed session 10. Nov 6 23:44:05.130624 systemd[1]: Started sshd@8-10.200.8.12:22-10.200.16.10:56274.service - OpenSSH per-connection server daemon (10.200.16.10:56274). Nov 6 23:44:05.757940 sshd[4777]: Accepted publickey for core from 10.200.16.10 port 56274 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:05.759390 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:05.764411 systemd-logind[1729]: New session 11 of user core. Nov 6 23:44:05.769490 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 23:44:06.276855 sshd[4779]: Connection closed by 10.200.16.10 port 56274 Nov 6 23:44:06.277599 sshd-session[4777]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:06.280435 systemd[1]: sshd@8-10.200.8.12:22-10.200.16.10:56274.service: Deactivated successfully. 
Nov 6 23:44:06.283146 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 23:44:06.284706 systemd-logind[1729]: Session 11 logged out. Waiting for processes to exit. Nov 6 23:44:06.285894 systemd-logind[1729]: Removed session 11. Nov 6 23:44:11.391595 systemd[1]: Started sshd@9-10.200.8.12:22-10.200.16.10:46360.service - OpenSSH per-connection server daemon (10.200.16.10:46360). Nov 6 23:44:12.016632 sshd[4792]: Accepted publickey for core from 10.200.16.10 port 46360 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:12.018076 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:12.023285 systemd-logind[1729]: New session 12 of user core. Nov 6 23:44:12.027486 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 23:44:12.521634 sshd[4794]: Connection closed by 10.200.16.10 port 46360 Nov 6 23:44:12.522587 sshd-session[4792]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:12.526442 systemd[1]: sshd@9-10.200.8.12:22-10.200.16.10:46360.service: Deactivated successfully. Nov 6 23:44:12.528769 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 23:44:12.529977 systemd-logind[1729]: Session 12 logged out. Waiting for processes to exit. Nov 6 23:44:12.530987 systemd-logind[1729]: Removed session 12. Nov 6 23:44:17.635499 systemd[1]: Started sshd@10-10.200.8.12:22-10.200.16.10:46368.service - OpenSSH per-connection server daemon (10.200.16.10:46368). Nov 6 23:44:18.269592 sshd[4806]: Accepted publickey for core from 10.200.16.10 port 46368 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:18.270744 sshd-session[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:18.275873 systemd-logind[1729]: New session 13 of user core. Nov 6 23:44:18.285475 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 23:44:18.802054 sshd[4808]: Connection closed by 10.200.16.10 port 46368 Nov 6 23:44:18.802821 sshd-session[4806]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:18.806593 systemd[1]: sshd@10-10.200.8.12:22-10.200.16.10:46368.service: Deactivated successfully. Nov 6 23:44:18.808929 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 23:44:18.809984 systemd-logind[1729]: Session 13 logged out. Waiting for processes to exit. Nov 6 23:44:18.811029 systemd-logind[1729]: Removed session 13. Nov 6 23:44:23.918610 systemd[1]: Started sshd@11-10.200.8.12:22-10.200.16.10:40616.service - OpenSSH per-connection server daemon (10.200.16.10:40616). Nov 6 23:44:24.549089 sshd[4821]: Accepted publickey for core from 10.200.16.10 port 40616 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:24.550516 sshd-session[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:24.554954 systemd-logind[1729]: New session 14 of user core. Nov 6 23:44:24.560450 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 23:44:25.049971 sshd[4824]: Connection closed by 10.200.16.10 port 40616 Nov 6 23:44:25.050697 sshd-session[4821]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:25.054398 systemd[1]: sshd@11-10.200.8.12:22-10.200.16.10:40616.service: Deactivated successfully. Nov 6 23:44:25.056513 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 23:44:25.057546 systemd-logind[1729]: Session 14 logged out. Waiting for processes to exit. 
Nov 6 23:44:25.058662 systemd-logind[1729]: Removed session 14. Nov 6 23:44:25.165738 systemd[1]: Started sshd@12-10.200.8.12:22-10.200.16.10:40628.service - OpenSSH per-connection server daemon (10.200.16.10:40628). Nov 6 23:44:25.962905 sshd[4837]: Accepted publickey for core from 10.200.16.10 port 40628 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:25.964356 sshd-session[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:25.969453 systemd-logind[1729]: New session 15 of user core. Nov 6 23:44:25.974466 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 23:44:26.522047 sshd[4841]: Connection closed by 10.200.16.10 port 40628 Nov 6 23:44:26.523526 sshd-session[4837]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:26.526285 systemd[1]: sshd@12-10.200.8.12:22-10.200.16.10:40628.service: Deactivated successfully. Nov 6 23:44:26.528742 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 23:44:26.530383 systemd-logind[1729]: Session 15 logged out. Waiting for processes to exit. Nov 6 23:44:26.532436 systemd-logind[1729]: Removed session 15. Nov 6 23:44:26.634555 systemd[1]: Started sshd@13-10.200.8.12:22-10.200.16.10:40634.service - OpenSSH per-connection server daemon (10.200.16.10:40634). Nov 6 23:44:27.265555 sshd[4851]: Accepted publickey for core from 10.200.16.10 port 40634 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:27.266984 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:27.271396 systemd-logind[1729]: New session 16 of user core. Nov 6 23:44:27.279456 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 23:44:27.784580 sshd[4853]: Connection closed by 10.200.16.10 port 40634 Nov 6 23:44:27.785294 sshd-session[4851]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:27.788216 systemd[1]: sshd@13-10.200.8.12:22-10.200.16.10:40634.service: Deactivated successfully. Nov 6 23:44:27.790443 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 23:44:27.792112 systemd-logind[1729]: Session 16 logged out. Waiting for processes to exit. Nov 6 23:44:27.793440 systemd-logind[1729]: Removed session 16. Nov 6 23:44:32.907674 systemd[1]: Started sshd@14-10.200.8.12:22-10.200.16.10:44372.service - OpenSSH per-connection server daemon (10.200.16.10:44372). Nov 6 23:44:33.532585 sshd[4867]: Accepted publickey for core from 10.200.16.10 port 44372 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:33.534025 sshd-session[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:33.538435 systemd-logind[1729]: New session 17 of user core. Nov 6 23:44:33.543503 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 23:44:34.059462 sshd[4869]: Connection closed by 10.200.16.10 port 44372 Nov 6 23:44:34.060196 sshd-session[4867]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:34.064253 systemd[1]: sshd@14-10.200.8.12:22-10.200.16.10:44372.service: Deactivated successfully. Nov 6 23:44:34.066665 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 23:44:34.067845 systemd-logind[1729]: Session 17 logged out. Waiting for processes to exit. Nov 6 23:44:34.068850 systemd-logind[1729]: Removed session 17. Nov 6 23:44:34.175634 systemd[1]: Started sshd@15-10.200.8.12:22-10.200.16.10:44376.service - OpenSSH per-connection server daemon (10.200.16.10:44376). 
Nov 6 23:44:34.802468 sshd[4881]: Accepted publickey for core from 10.200.16.10 port 44376 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:34.803927 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:34.809647 systemd-logind[1729]: New session 18 of user core. Nov 6 23:44:34.815485 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 23:44:35.385683 sshd[4883]: Connection closed by 10.200.16.10 port 44376 Nov 6 23:44:35.386529 sshd-session[4881]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:35.390379 systemd[1]: sshd@15-10.200.8.12:22-10.200.16.10:44376.service: Deactivated successfully. Nov 6 23:44:35.392473 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 23:44:35.393313 systemd-logind[1729]: Session 18 logged out. Waiting for processes to exit. Nov 6 23:44:35.394363 systemd-logind[1729]: Removed session 18. Nov 6 23:44:35.497756 systemd[1]: Started sshd@16-10.200.8.12:22-10.200.16.10:44384.service - OpenSSH per-connection server daemon (10.200.16.10:44384). Nov 6 23:44:36.126272 sshd[4892]: Accepted publickey for core from 10.200.16.10 port 44384 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:36.126906 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:36.132168 systemd-logind[1729]: New session 19 of user core. Nov 6 23:44:36.140470 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 23:44:37.001821 sshd[4894]: Connection closed by 10.200.16.10 port 44384 Nov 6 23:44:37.002579 sshd-session[4892]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:37.005541 systemd[1]: sshd@16-10.200.8.12:22-10.200.16.10:44384.service: Deactivated successfully. Nov 6 23:44:37.007943 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 23:44:37.011819 systemd-logind[1729]: Session 19 logged out. Waiting for processes to exit. Nov 6 23:44:37.012930 systemd-logind[1729]: Removed session 19. Nov 6 23:44:37.122422 systemd[1]: Started sshd@17-10.200.8.12:22-10.200.16.10:44386.service - OpenSSH per-connection server daemon (10.200.16.10:44386). Nov 6 23:44:37.752699 sshd[4910]: Accepted publickey for core from 10.200.16.10 port 44386 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:37.754105 sshd-session[4910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:37.759363 systemd-logind[1729]: New session 20 of user core. Nov 6 23:44:37.764486 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 23:44:38.375849 sshd[4912]: Connection closed by 10.200.16.10 port 44386 Nov 6 23:44:38.376593 sshd-session[4910]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:38.380569 systemd[1]: sshd@17-10.200.8.12:22-10.200.16.10:44386.service: Deactivated successfully. Nov 6 23:44:38.382627 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 23:44:38.383728 systemd-logind[1729]: Session 20 logged out. Waiting for processes to exit. Nov 6 23:44:38.385045 systemd-logind[1729]: Removed session 20. Nov 6 23:44:38.495597 systemd[1]: Started sshd@18-10.200.8.12:22-10.200.16.10:44402.service - OpenSSH per-connection server daemon (10.200.16.10:44402). 
Nov 6 23:44:39.123352 sshd[4924]: Accepted publickey for core from 10.200.16.10 port 44402 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:39.124898 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:39.130347 systemd-logind[1729]: New session 21 of user core. Nov 6 23:44:39.135452 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 23:44:39.625540 sshd[4926]: Connection closed by 10.200.16.10 port 44402 Nov 6 23:44:39.626239 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:39.629241 systemd[1]: sshd@18-10.200.8.12:22-10.200.16.10:44402.service: Deactivated successfully. Nov 6 23:44:39.631634 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 23:44:39.633160 systemd-logind[1729]: Session 21 logged out. Waiting for processes to exit. Nov 6 23:44:39.634268 systemd-logind[1729]: Removed session 21. Nov 6 23:44:44.751653 systemd[1]: Started sshd@19-10.200.8.12:22-10.200.16.10:52356.service - OpenSSH per-connection server daemon (10.200.16.10:52356). Nov 6 23:44:45.379632 sshd[4940]: Accepted publickey for core from 10.200.16.10 port 52356 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:45.381066 sshd-session[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:45.385286 systemd-logind[1729]: New session 22 of user core. Nov 6 23:44:45.393470 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 6 23:44:45.902201 sshd[4942]: Connection closed by 10.200.16.10 port 52356 Nov 6 23:44:45.902945 sshd-session[4940]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:45.906852 systemd-logind[1729]: Session 22 logged out. Waiting for processes to exit. Nov 6 23:44:45.907598 systemd[1]: sshd@19-10.200.8.12:22-10.200.16.10:52356.service: Deactivated successfully. Nov 6 23:44:45.910103 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 23:44:45.911133 systemd-logind[1729]: Removed session 22. Nov 6 23:44:51.021042 systemd[1]: Started sshd@20-10.200.8.12:22-10.200.16.10:50378.service - OpenSSH per-connection server daemon (10.200.16.10:50378). Nov 6 23:44:51.646402 sshd[4954]: Accepted publickey for core from 10.200.16.10 port 50378 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:51.647820 sshd-session[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:51.652885 systemd-logind[1729]: New session 23 of user core. Nov 6 23:44:51.656462 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 23:44:52.160634 sshd[4956]: Connection closed by 10.200.16.10 port 50378 Nov 6 23:44:52.161392 sshd-session[4954]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:52.165448 systemd-logind[1729]: Session 23 logged out. Waiting for processes to exit. Nov 6 23:44:52.166337 systemd[1]: sshd@20-10.200.8.12:22-10.200.16.10:50378.service: Deactivated successfully. Nov 6 23:44:52.168851 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 23:44:52.169890 systemd-logind[1729]: Removed session 23. Nov 6 23:44:52.278640 systemd[1]: Started sshd@21-10.200.8.12:22-10.200.16.10:50392.service - OpenSSH per-connection server daemon (10.200.16.10:50392). 
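The per-connection sshd units above encode a connection counter plus the local and remote endpoints in their instance name (e.g. sshd@7-10.200.8.12:22-10.200.16.10:56612.service). A small parser for that layout, inferred from these log lines only (IPv4 addresses assumed, since IPv6 would introduce extra separators):

package main

import (
	"fmt"
	"strings"
)

func main() {
	unit := "sshd@7-10.200.8.12:22-10.200.16.10:56612.service"

	// Strip the template prefix and unit suffix, then split into
	// counter, local addr:port, remote addr:port.
	instance := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	parts := strings.SplitN(instance, "-", 3)
	if len(parts) != 3 {
		fmt.Println("unexpected instance format:", instance)
		return
	}
	fmt.Printf("counter=%s local=%s remote=%s\n", parts[0], parts[1], parts[2])
}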
Nov 6 23:44:52.911506 sshd[4968]: Accepted publickey for core from 10.200.16.10 port 50392 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:52.912977 sshd-session[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:52.917426 systemd-logind[1729]: New session 24 of user core. Nov 6 23:44:52.929470 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 6 23:44:54.569203 systemd[1]: run-containerd-runc-k8s.io-060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a-runc.EtyWAK.mount: Deactivated successfully. Nov 6 23:44:54.585672 containerd[1753]: time="2025-11-06T23:44:54.585469937Z" level=info msg="StopContainer for \"397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c\" with timeout 30 (s)" Nov 6 23:44:54.586579 containerd[1753]: time="2025-11-06T23:44:54.586529643Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 23:44:54.587515 containerd[1753]: time="2025-11-06T23:44:54.587409849Z" level=info msg="Stop container \"397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c\" with signal terminated" Nov 6 23:44:54.603600 containerd[1753]: time="2025-11-06T23:44:54.603537847Z" level=info msg="StopContainer for \"060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a\" with timeout 2 (s)" Nov 6 23:44:54.604477 containerd[1753]: time="2025-11-06T23:44:54.604259452Z" level=info msg="Stop container \"060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a\" with signal terminated" Nov 6 23:44:54.619559 systemd-networkd[1344]: lxc_health: Link DOWN Nov 6 23:44:54.619568 systemd-networkd[1344]: lxc_health: Lost carrier Nov 6 23:44:54.625519 systemd[1]: cri-containerd-397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c.scope: Deactivated successfully. Nov 6 23:44:54.660668 systemd[1]: cri-containerd-060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a.scope: Deactivated successfully. Nov 6 23:44:54.661050 systemd[1]: cri-containerd-060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a.scope: Consumed 7.107s CPU time, 122.8M memory peak, 144K read from disk, 13.3M written to disk. Nov 6 23:44:54.677861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c-rootfs.mount: Deactivated successfully. Nov 6 23:44:54.691146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a-rootfs.mount: Deactivated successfully. 
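The "StopContainer ... with timeout 30 (s)" and "timeout 2 (s)" entries above are CRI RuntimeService calls issued by kubelet. A minimal sketch of the same call using the k8s.io/cri-api gRPC bindings against containerd's default CRI socket (an assumption for illustration; kubelet holds its own connection):

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// Same request shape as the cilium-operator stop above; the cilium-agent
	// container was stopped the same way with Timeout: 2.
	_, err = rt.StopContainer(context.Background(), &runtimeapi.StopContainerRequest{
		ContainerId: "397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c",
		Timeout:     30, // seconds
	})
	if err != nil {
		log.Fatal(err)
	}
}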
Nov 6 23:44:54.752632 containerd[1753]: time="2025-11-06T23:44:54.752558857Z" level=info msg="shim disconnected" id=397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c namespace=k8s.io Nov 6 23:44:54.752943 containerd[1753]: time="2025-11-06T23:44:54.752918659Z" level=warning msg="cleaning up after shim disconnected" id=397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c namespace=k8s.io Nov 6 23:44:54.753220 containerd[1753]: time="2025-11-06T23:44:54.753024360Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:44:54.753220 containerd[1753]: time="2025-11-06T23:44:54.753068460Z" level=info msg="shim disconnected" id=060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a namespace=k8s.io Nov 6 23:44:54.753220 containerd[1753]: time="2025-11-06T23:44:54.753119261Z" level=warning msg="cleaning up after shim disconnected" id=060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a namespace=k8s.io Nov 6 23:44:54.753220 containerd[1753]: time="2025-11-06T23:44:54.753131561Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:44:54.780044 containerd[1753]: time="2025-11-06T23:44:54.779981725Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:44:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:44:54.784038 containerd[1753]: time="2025-11-06T23:44:54.783999749Z" level=info msg="StopContainer for \"060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a\" returns successfully" Nov 6 23:44:54.784830 containerd[1753]: time="2025-11-06T23:44:54.784799054Z" level=info msg="StopPodSandbox for \"5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3\"" Nov 6 23:44:54.785932 containerd[1753]: time="2025-11-06T23:44:54.784841454Z" level=info msg="Container to stop \"709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:44:54.785932 containerd[1753]: time="2025-11-06T23:44:54.784884754Z" level=info msg="Container to stop \"ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:44:54.785932 containerd[1753]: time="2025-11-06T23:44:54.784898255Z" level=info msg="Container to stop \"060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:44:54.785932 containerd[1753]: time="2025-11-06T23:44:54.784908855Z" level=info msg="Container to stop \"5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:44:54.785932 containerd[1753]: time="2025-11-06T23:44:54.784920155Z" level=info msg="Container to stop \"e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:44:54.787877 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3-shm.mount: Deactivated successfully. 
Nov 6 23:44:54.791321 containerd[1753]: time="2025-11-06T23:44:54.790412588Z" level=info msg="StopContainer for \"397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c\" returns successfully" Nov 6 23:44:54.791321 containerd[1753]: time="2025-11-06T23:44:54.791085692Z" level=info msg="StopPodSandbox for \"a32346ee00b9ff24544a5a58470e4f4a0d7f9eac9b4f86dacfa8af22542df2c7\"" Nov 6 23:44:54.791321 containerd[1753]: time="2025-11-06T23:44:54.791129693Z" level=info msg="Container to stop \"397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:44:54.801062 systemd[1]: cri-containerd-5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3.scope: Deactivated successfully. Nov 6 23:44:54.804936 systemd[1]: cri-containerd-a32346ee00b9ff24544a5a58470e4f4a0d7f9eac9b4f86dacfa8af22542df2c7.scope: Deactivated successfully. Nov 6 23:44:54.847192 containerd[1753]: time="2025-11-06T23:44:54.846164129Z" level=info msg="shim disconnected" id=5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3 namespace=k8s.io Nov 6 23:44:54.847192 containerd[1753]: time="2025-11-06T23:44:54.846236629Z" level=warning msg="cleaning up after shim disconnected" id=5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3 namespace=k8s.io Nov 6 23:44:54.847192 containerd[1753]: time="2025-11-06T23:44:54.846248829Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:44:54.848684 containerd[1753]: time="2025-11-06T23:44:54.848390242Z" level=info msg="shim disconnected" id=a32346ee00b9ff24544a5a58470e4f4a0d7f9eac9b4f86dacfa8af22542df2c7 namespace=k8s.io Nov 6 23:44:54.848684 containerd[1753]: time="2025-11-06T23:44:54.848447243Z" level=warning msg="cleaning up after shim disconnected" id=a32346ee00b9ff24544a5a58470e4f4a0d7f9eac9b4f86dacfa8af22542df2c7 namespace=k8s.io Nov 6 23:44:54.848684 containerd[1753]: time="2025-11-06T23:44:54.848458843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:44:54.874821 containerd[1753]: time="2025-11-06T23:44:54.874753703Z" level=info msg="TearDown network for sandbox \"a32346ee00b9ff24544a5a58470e4f4a0d7f9eac9b4f86dacfa8af22542df2c7\" successfully" Nov 6 23:44:54.874821 containerd[1753]: time="2025-11-06T23:44:54.874809003Z" level=info msg="StopPodSandbox for \"a32346ee00b9ff24544a5a58470e4f4a0d7f9eac9b4f86dacfa8af22542df2c7\" returns successfully" Nov 6 23:44:54.875017 containerd[1753]: time="2025-11-06T23:44:54.874778203Z" level=info msg="TearDown network for sandbox \"5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3\" successfully" Nov 6 23:44:54.875017 containerd[1753]: time="2025-11-06T23:44:54.874955504Z" level=info msg="StopPodSandbox for \"5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3\" returns successfully" Nov 6 23:44:54.946195 kubelet[3367]: I1106 23:44:54.946093 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-cilium-run\") pod \"4fa8983e-5240-4db8-ae67-f06b36071332\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " Nov 6 23:44:54.946195 kubelet[3367]: I1106 23:44:54.946141 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-etc-cni-netd\") pod \"4fa8983e-5240-4db8-ae67-f06b36071332\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " Nov 6 
23:44:54.946195 kubelet[3367]: I1106 23:44:54.946173 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2t24\" (UniqueName: \"kubernetes.io/projected/4fa8983e-5240-4db8-ae67-f06b36071332-kube-api-access-p2t24\") pod \"4fa8983e-5240-4db8-ae67-f06b36071332\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " Nov 6 23:44:54.946195 kubelet[3367]: I1106 23:44:54.946195 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-host-proc-sys-kernel\") pod \"4fa8983e-5240-4db8-ae67-f06b36071332\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " Nov 6 23:44:54.946886 kubelet[3367]: I1106 23:44:54.946222 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt6qc\" (UniqueName: \"kubernetes.io/projected/a67a777f-7cf7-4b73-b036-1c1df94639f9-kube-api-access-xt6qc\") pod \"a67a777f-7cf7-4b73-b036-1c1df94639f9\" (UID: \"a67a777f-7cf7-4b73-b036-1c1df94639f9\") " Nov 6 23:44:54.946886 kubelet[3367]: I1106 23:44:54.946246 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4fa8983e-5240-4db8-ae67-f06b36071332-hubble-tls\") pod \"4fa8983e-5240-4db8-ae67-f06b36071332\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " Nov 6 23:44:54.946886 kubelet[3367]: I1106 23:44:54.946265 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-bpf-maps\") pod \"4fa8983e-5240-4db8-ae67-f06b36071332\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " Nov 6 23:44:54.946886 kubelet[3367]: I1106 23:44:54.946287 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-xtables-lock\") pod \"4fa8983e-5240-4db8-ae67-f06b36071332\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " Nov 6 23:44:54.946886 kubelet[3367]: I1106 23:44:54.946326 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-cilium-cgroup\") pod \"4fa8983e-5240-4db8-ae67-f06b36071332\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " Nov 6 23:44:54.946886 kubelet[3367]: I1106 23:44:54.946351 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a67a777f-7cf7-4b73-b036-1c1df94639f9-cilium-config-path\") pod \"a67a777f-7cf7-4b73-b036-1c1df94639f9\" (UID: \"a67a777f-7cf7-4b73-b036-1c1df94639f9\") " Nov 6 23:44:54.947133 kubelet[3367]: I1106 23:44:54.946418 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-hostproc\") pod \"4fa8983e-5240-4db8-ae67-f06b36071332\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " Nov 6 23:44:54.947133 kubelet[3367]: I1106 23:44:54.946444 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4fa8983e-5240-4db8-ae67-f06b36071332-clustermesh-secrets\") pod \"4fa8983e-5240-4db8-ae67-f06b36071332\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " Nov 6 23:44:54.947133 kubelet[3367]: I1106 
23:44:54.946462 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-host-proc-sys-net\") pod \"4fa8983e-5240-4db8-ae67-f06b36071332\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " Nov 6 23:44:54.947133 kubelet[3367]: I1106 23:44:54.946484 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-cni-path\") pod \"4fa8983e-5240-4db8-ae67-f06b36071332\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " Nov 6 23:44:54.947133 kubelet[3367]: I1106 23:44:54.946504 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-lib-modules\") pod \"4fa8983e-5240-4db8-ae67-f06b36071332\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " Nov 6 23:44:54.947133 kubelet[3367]: I1106 23:44:54.946530 3367 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fa8983e-5240-4db8-ae67-f06b36071332-cilium-config-path\") pod \"4fa8983e-5240-4db8-ae67-f06b36071332\" (UID: \"4fa8983e-5240-4db8-ae67-f06b36071332\") " Nov 6 23:44:54.948326 kubelet[3367]: I1106 23:44:54.947903 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4fa8983e-5240-4db8-ae67-f06b36071332" (UID: "4fa8983e-5240-4db8-ae67-f06b36071332"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:44:54.948326 kubelet[3367]: I1106 23:44:54.948067 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4fa8983e-5240-4db8-ae67-f06b36071332" (UID: "4fa8983e-5240-4db8-ae67-f06b36071332"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:44:54.948326 kubelet[3367]: I1106 23:44:54.948123 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4fa8983e-5240-4db8-ae67-f06b36071332" (UID: "4fa8983e-5240-4db8-ae67-f06b36071332"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:44:54.949514 kubelet[3367]: I1106 23:44:54.949397 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4fa8983e-5240-4db8-ae67-f06b36071332" (UID: "4fa8983e-5240-4db8-ae67-f06b36071332"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:44:54.949514 kubelet[3367]: I1106 23:44:54.949422 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-hostproc" (OuterVolumeSpecName: "hostproc") pod "4fa8983e-5240-4db8-ae67-f06b36071332" (UID: "4fa8983e-5240-4db8-ae67-f06b36071332"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:44:54.950666 kubelet[3367]: I1106 23:44:54.950619 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4fa8983e-5240-4db8-ae67-f06b36071332" (UID: "4fa8983e-5240-4db8-ae67-f06b36071332"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:44:54.951819 kubelet[3367]: I1106 23:44:54.951666 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4fa8983e-5240-4db8-ae67-f06b36071332" (UID: "4fa8983e-5240-4db8-ae67-f06b36071332"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:44:54.951819 kubelet[3367]: I1106 23:44:54.951707 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-cni-path" (OuterVolumeSpecName: "cni-path") pod "4fa8983e-5240-4db8-ae67-f06b36071332" (UID: "4fa8983e-5240-4db8-ae67-f06b36071332"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:44:54.951819 kubelet[3367]: I1106 23:44:54.951727 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4fa8983e-5240-4db8-ae67-f06b36071332" (UID: "4fa8983e-5240-4db8-ae67-f06b36071332"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:44:54.952825 kubelet[3367]: I1106 23:44:54.952714 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4fa8983e-5240-4db8-ae67-f06b36071332" (UID: "4fa8983e-5240-4db8-ae67-f06b36071332"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:44:54.956942 kubelet[3367]: I1106 23:44:54.956913 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a67a777f-7cf7-4b73-b036-1c1df94639f9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a67a777f-7cf7-4b73-b036-1c1df94639f9" (UID: "a67a777f-7cf7-4b73-b036-1c1df94639f9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 23:44:54.957875 kubelet[3367]: I1106 23:44:54.957470 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa8983e-5240-4db8-ae67-f06b36071332-kube-api-access-p2t24" (OuterVolumeSpecName: "kube-api-access-p2t24") pod "4fa8983e-5240-4db8-ae67-f06b36071332" (UID: "4fa8983e-5240-4db8-ae67-f06b36071332"). InnerVolumeSpecName "kube-api-access-p2t24". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:44:54.958049 kubelet[3367]: I1106 23:44:54.958028 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa8983e-5240-4db8-ae67-f06b36071332-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4fa8983e-5240-4db8-ae67-f06b36071332" (UID: "4fa8983e-5240-4db8-ae67-f06b36071332"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 23:44:54.958200 kubelet[3367]: I1106 23:44:54.958113 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fa8983e-5240-4db8-ae67-f06b36071332-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4fa8983e-5240-4db8-ae67-f06b36071332" (UID: "4fa8983e-5240-4db8-ae67-f06b36071332"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 23:44:54.958672 kubelet[3367]: I1106 23:44:54.958647 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa8983e-5240-4db8-ae67-f06b36071332-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4fa8983e-5240-4db8-ae67-f06b36071332" (UID: "4fa8983e-5240-4db8-ae67-f06b36071332"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:44:54.958973 kubelet[3367]: I1106 23:44:54.958947 3367 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67a777f-7cf7-4b73-b036-1c1df94639f9-kube-api-access-xt6qc" (OuterVolumeSpecName: "kube-api-access-xt6qc") pod "a67a777f-7cf7-4b73-b036-1c1df94639f9" (UID: "a67a777f-7cf7-4b73-b036-1c1df94639f9"). InnerVolumeSpecName "kube-api-access-xt6qc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:44:55.047459 kubelet[3367]: I1106 23:44:55.047411 3367 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4fa8983e-5240-4db8-ae67-f06b36071332-hubble-tls\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.047459 kubelet[3367]: I1106 23:44:55.047452 3367 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-bpf-maps\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.047459 kubelet[3367]: I1106 23:44:55.047463 3367 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-xtables-lock\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.047459 kubelet[3367]: I1106 23:44:55.047473 3367 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-cilium-cgroup\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.047776 kubelet[3367]: I1106 23:44:55.047488 3367 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a67a777f-7cf7-4b73-b036-1c1df94639f9-cilium-config-path\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.047776 kubelet[3367]: I1106 23:44:55.047501 3367 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-hostproc\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.047776 kubelet[3367]: I1106 23:44:55.047514 3367 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4fa8983e-5240-4db8-ae67-f06b36071332-clustermesh-secrets\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.047776 kubelet[3367]: I1106 23:44:55.047526 3367 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-host-proc-sys-net\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.047776 kubelet[3367]: I1106 23:44:55.047535 3367 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-cni-path\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.047776 kubelet[3367]: I1106 23:44:55.047592 3367 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-lib-modules\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.047776 kubelet[3367]: I1106 23:44:55.047603 3367 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fa8983e-5240-4db8-ae67-f06b36071332-cilium-config-path\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.047776 kubelet[3367]: I1106 23:44:55.047614 3367 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-cilium-run\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.047959 kubelet[3367]: I1106 23:44:55.047625 3367 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-etc-cni-netd\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.047959 kubelet[3367]: I1106 23:44:55.047635 3367 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p2t24\" (UniqueName: \"kubernetes.io/projected/4fa8983e-5240-4db8-ae67-f06b36071332-kube-api-access-p2t24\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.047959 kubelet[3367]: I1106 23:44:55.047646 3367 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4fa8983e-5240-4db8-ae67-f06b36071332-host-proc-sys-kernel\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.047959 kubelet[3367]: I1106 23:44:55.047659 3367 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xt6qc\" (UniqueName: \"kubernetes.io/projected/a67a777f-7cf7-4b73-b036-1c1df94639f9-kube-api-access-xt6qc\") on node \"ci-4230.2.4-n-c920fca088\" DevicePath \"\"" Nov 6 23:44:55.088234 kubelet[3367]: I1106 23:44:55.085646 3367 scope.go:117] "RemoveContainer" containerID="397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c" Nov 6 23:44:55.088429 containerd[1753]: time="2025-11-06T23:44:55.087914305Z" level=info msg="RemoveContainer for \"397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c\"" Nov 6 23:44:55.093204 systemd[1]: Removed slice kubepods-besteffort-poda67a777f_7cf7_4b73_b036_1c1df94639f9.slice - libcontainer container kubepods-besteffort-poda67a777f_7cf7_4b73_b036_1c1df94639f9.slice. Nov 6 23:44:55.103165 containerd[1753]: time="2025-11-06T23:44:55.102994497Z" level=info msg="RemoveContainer for \"397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c\" returns successfully" Nov 6 23:44:55.105560 systemd[1]: Removed slice kubepods-burstable-pod4fa8983e_5240_4db8_ae67_f06b36071332.slice - libcontainer container kubepods-burstable-pod4fa8983e_5240_4db8_ae67_f06b36071332.slice. 
Nov 6 23:44:55.105921 systemd[1]: kubepods-burstable-pod4fa8983e_5240_4db8_ae67_f06b36071332.slice: Consumed 7.191s CPU time, 123.2M memory peak, 144K read from disk, 13.3M written to disk. Nov 6 23:44:55.106763 kubelet[3367]: I1106 23:44:55.106650 3367 scope.go:117] "RemoveContainer" containerID="397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c" Nov 6 23:44:55.108293 containerd[1753]: time="2025-11-06T23:44:55.108248229Z" level=error msg="ContainerStatus for \"397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c\": not found" Nov 6 23:44:55.108733 kubelet[3367]: E1106 23:44:55.108587 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c\": not found" containerID="397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c" Nov 6 23:44:55.108733 kubelet[3367]: I1106 23:44:55.108620 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c"} err="failed to get container status \"397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c\": rpc error: code = NotFound desc = an error occurred when try to find container \"397137e70cb289ada9cd0ec95d15adec6f487d80583209e32027942f71f5662c\": not found" Nov 6 23:44:55.108733 kubelet[3367]: I1106 23:44:55.108665 3367 scope.go:117] "RemoveContainer" containerID="060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a" Nov 6 23:44:55.110114 containerd[1753]: time="2025-11-06T23:44:55.110089840Z" level=info msg="RemoveContainer for \"060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a\"" Nov 6 23:44:55.117351 containerd[1753]: time="2025-11-06T23:44:55.117267484Z" level=info msg="RemoveContainer for \"060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a\" returns successfully" Nov 6 23:44:55.117680 kubelet[3367]: I1106 23:44:55.117652 3367 scope.go:117] "RemoveContainer" containerID="e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c" Nov 6 23:44:55.119921 containerd[1753]: time="2025-11-06T23:44:55.119523998Z" level=info msg="RemoveContainer for \"e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c\"" Nov 6 23:44:55.128627 containerd[1753]: time="2025-11-06T23:44:55.128579953Z" level=info msg="RemoveContainer for \"e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c\" returns successfully" Nov 6 23:44:55.128889 kubelet[3367]: I1106 23:44:55.128864 3367 scope.go:117] "RemoveContainer" containerID="ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a" Nov 6 23:44:55.129985 containerd[1753]: time="2025-11-06T23:44:55.129869461Z" level=info msg="RemoveContainer for \"ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a\"" Nov 6 23:44:55.140231 containerd[1753]: time="2025-11-06T23:44:55.140190424Z" level=info msg="RemoveContainer for \"ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a\" returns successfully" Nov 6 23:44:55.140437 kubelet[3367]: I1106 23:44:55.140415 3367 scope.go:117] "RemoveContainer" containerID="5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944" Nov 6 23:44:55.141728 containerd[1753]: time="2025-11-06T23:44:55.141481332Z" level=info 
msg="RemoveContainer for \"5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944\"" Nov 6 23:44:55.148034 containerd[1753]: time="2025-11-06T23:44:55.147958871Z" level=info msg="RemoveContainer for \"5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944\" returns successfully" Nov 6 23:44:55.148250 kubelet[3367]: I1106 23:44:55.148225 3367 scope.go:117] "RemoveContainer" containerID="709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3" Nov 6 23:44:55.149438 containerd[1753]: time="2025-11-06T23:44:55.149353880Z" level=info msg="RemoveContainer for \"709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3\"" Nov 6 23:44:55.155352 containerd[1753]: time="2025-11-06T23:44:55.155288916Z" level=info msg="RemoveContainer for \"709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3\" returns successfully" Nov 6 23:44:55.155527 kubelet[3367]: I1106 23:44:55.155501 3367 scope.go:117] "RemoveContainer" containerID="060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a" Nov 6 23:44:55.155757 containerd[1753]: time="2025-11-06T23:44:55.155724319Z" level=error msg="ContainerStatus for \"060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a\": not found" Nov 6 23:44:55.155922 kubelet[3367]: E1106 23:44:55.155886 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a\": not found" containerID="060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a" Nov 6 23:44:55.156019 kubelet[3367]: I1106 23:44:55.155917 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a"} err="failed to get container status \"060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a\": rpc error: code = NotFound desc = an error occurred when try to find container \"060d920e25ddbead7247197cbf2c4051dc0e47d282ce48f77a360458fa48511a\": not found" Nov 6 23:44:55.156019 kubelet[3367]: I1106 23:44:55.156014 3367 scope.go:117] "RemoveContainer" containerID="e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c" Nov 6 23:44:55.156258 containerd[1753]: time="2025-11-06T23:44:55.156230122Z" level=error msg="ContainerStatus for \"e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c\": not found" Nov 6 23:44:55.156398 kubelet[3367]: E1106 23:44:55.156373 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c\": not found" containerID="e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c" Nov 6 23:44:55.156454 kubelet[3367]: I1106 23:44:55.156403 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c"} err="failed to get container status \"e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"e05dc21be1fef800a4620fccc67f354715e18c3ff09a4e24c0d55e0650fd085c\": not found" Nov 6 23:44:55.156454 kubelet[3367]: I1106 23:44:55.156423 3367 scope.go:117] "RemoveContainer" containerID="ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a" Nov 6 23:44:55.156634 containerd[1753]: time="2025-11-06T23:44:55.156601424Z" level=error msg="ContainerStatus for \"ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a\": not found" Nov 6 23:44:55.156748 kubelet[3367]: E1106 23:44:55.156720 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a\": not found" containerID="ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a" Nov 6 23:44:55.156805 kubelet[3367]: I1106 23:44:55.156749 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a"} err="failed to get container status \"ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac20ab6273a0536e9a571f065f72a193fd752e982dd5f6a11d26fcc7d0d7423a\": not found" Nov 6 23:44:55.156805 kubelet[3367]: I1106 23:44:55.156773 3367 scope.go:117] "RemoveContainer" containerID="5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944" Nov 6 23:44:55.156982 containerd[1753]: time="2025-11-06T23:44:55.156949826Z" level=error msg="ContainerStatus for \"5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944\": not found" Nov 6 23:44:55.157123 kubelet[3367]: E1106 23:44:55.157078 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944\": not found" containerID="5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944" Nov 6 23:44:55.157221 kubelet[3367]: I1106 23:44:55.157125 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944"} err="failed to get container status \"5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b355d58bf2ff637a6bfad2bc921b54cbb1c89cca7e0c14b51a02e1f8e547944\": not found" Nov 6 23:44:55.157221 kubelet[3367]: I1106 23:44:55.157144 3367 scope.go:117] "RemoveContainer" containerID="709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3" Nov 6 23:44:55.157453 containerd[1753]: time="2025-11-06T23:44:55.157361429Z" level=error msg="ContainerStatus for \"709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3\": not found" Nov 6 23:44:55.157598 kubelet[3367]: E1106 23:44:55.157493 3367 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3\": not found" containerID="709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3" Nov 6 23:44:55.157598 kubelet[3367]: I1106 23:44:55.157546 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3"} err="failed to get container status \"709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"709fdcdb6588d4e360abcd534b3301252e6cea9eb8abf64c238c8b62d0d359f3\": not found" Nov 6 23:44:55.558348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a32346ee00b9ff24544a5a58470e4f4a0d7f9eac9b4f86dacfa8af22542df2c7-rootfs.mount: Deactivated successfully. Nov 6 23:44:55.558483 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a32346ee00b9ff24544a5a58470e4f4a0d7f9eac9b4f86dacfa8af22542df2c7-shm.mount: Deactivated successfully. Nov 6 23:44:55.558575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d020ceed13f0ab9d280a36ee856a2ba22c15bb149c4e109fdeffa7826b068b3-rootfs.mount: Deactivated successfully. Nov 6 23:44:55.558662 systemd[1]: var-lib-kubelet-pods-a67a777f\x2d7cf7\x2d4b73\x2db036\x2d1c1df94639f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxt6qc.mount: Deactivated successfully. Nov 6 23:44:55.558753 systemd[1]: var-lib-kubelet-pods-4fa8983e\x2d5240\x2d4db8\x2dae67\x2df06b36071332-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp2t24.mount: Deactivated successfully. Nov 6 23:44:55.558849 systemd[1]: var-lib-kubelet-pods-4fa8983e\x2d5240\x2d4db8\x2dae67\x2df06b36071332-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 6 23:44:55.558944 systemd[1]: var-lib-kubelet-pods-4fa8983e\x2d5240\x2d4db8\x2dae67\x2df06b36071332-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 6 23:44:55.723979 kubelet[3367]: I1106 23:44:55.723933 3367 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa8983e-5240-4db8-ae67-f06b36071332" path="/var/lib/kubelet/pods/4fa8983e-5240-4db8-ae67-f06b36071332/volumes" Nov 6 23:44:55.724690 kubelet[3367]: I1106 23:44:55.724658 3367 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67a777f-7cf7-4b73-b036-1c1df94639f9" path="/var/lib/kubelet/pods/a67a777f-7cf7-4b73-b036-1c1df94639f9/volumes" Nov 6 23:44:55.806659 kubelet[3367]: E1106 23:44:55.806612 3367 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 6 23:44:56.609455 sshd[4970]: Connection closed by 10.200.16.10 port 50392 Nov 6 23:44:56.610382 sshd-session[4968]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:56.613444 systemd[1]: sshd@21-10.200.8.12:22-10.200.16.10:50392.service: Deactivated successfully. Nov 6 23:44:56.615727 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 23:44:56.618062 systemd-logind[1729]: Session 24 logged out. Waiting for processes to exit. Nov 6 23:44:56.619174 systemd-logind[1729]: Removed session 24. Nov 6 23:44:56.725600 systemd[1]: Started sshd@22-10.200.8.12:22-10.200.16.10:50394.service - OpenSSH per-connection server daemon (10.200.16.10:50394). 
Nov 6 23:44:57.349607 sshd[5131]: Accepted publickey for core from 10.200.16.10 port 50394 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:57.351015 sshd-session[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:57.355387 systemd-logind[1729]: New session 25 of user core. Nov 6 23:44:57.363446 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 6 23:44:58.233838 systemd[1]: Created slice kubepods-burstable-pod7503f8aa_b9b8_44f7_befe_b5b016f96a43.slice - libcontainer container kubepods-burstable-pod7503f8aa_b9b8_44f7_befe_b5b016f96a43.slice. Nov 6 23:44:58.267218 kubelet[3367]: I1106 23:44:58.266696 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7503f8aa-b9b8-44f7-befe-b5b016f96a43-cni-path\") pod \"cilium-rsl5s\" (UID: \"7503f8aa-b9b8-44f7-befe-b5b016f96a43\") " pod="kube-system/cilium-rsl5s" Nov 6 23:44:58.267218 kubelet[3367]: I1106 23:44:58.266740 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7503f8aa-b9b8-44f7-befe-b5b016f96a43-lib-modules\") pod \"cilium-rsl5s\" (UID: \"7503f8aa-b9b8-44f7-befe-b5b016f96a43\") " pod="kube-system/cilium-rsl5s" Nov 6 23:44:58.267218 kubelet[3367]: I1106 23:44:58.266763 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7503f8aa-b9b8-44f7-befe-b5b016f96a43-clustermesh-secrets\") pod \"cilium-rsl5s\" (UID: \"7503f8aa-b9b8-44f7-befe-b5b016f96a43\") " pod="kube-system/cilium-rsl5s" Nov 6 23:44:58.267218 kubelet[3367]: I1106 23:44:58.266783 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7503f8aa-b9b8-44f7-befe-b5b016f96a43-cilium-config-path\") pod \"cilium-rsl5s\" (UID: \"7503f8aa-b9b8-44f7-befe-b5b016f96a43\") " pod="kube-system/cilium-rsl5s" Nov 6 23:44:58.267218 kubelet[3367]: I1106 23:44:58.266806 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7503f8aa-b9b8-44f7-befe-b5b016f96a43-xtables-lock\") pod \"cilium-rsl5s\" (UID: \"7503f8aa-b9b8-44f7-befe-b5b016f96a43\") " pod="kube-system/cilium-rsl5s" Nov 6 23:44:58.267218 kubelet[3367]: I1106 23:44:58.266830 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7503f8aa-b9b8-44f7-befe-b5b016f96a43-cilium-run\") pod \"cilium-rsl5s\" (UID: \"7503f8aa-b9b8-44f7-befe-b5b016f96a43\") " pod="kube-system/cilium-rsl5s" Nov 6 23:44:58.267868 kubelet[3367]: I1106 23:44:58.266853 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7503f8aa-b9b8-44f7-befe-b5b016f96a43-etc-cni-netd\") pod \"cilium-rsl5s\" (UID: \"7503f8aa-b9b8-44f7-befe-b5b016f96a43\") " pod="kube-system/cilium-rsl5s" Nov 6 23:44:58.267868 kubelet[3367]: I1106 23:44:58.266876 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7503f8aa-b9b8-44f7-befe-b5b016f96a43-hostproc\") pod \"cilium-rsl5s\" (UID: 
\"7503f8aa-b9b8-44f7-befe-b5b016f96a43\") " pod="kube-system/cilium-rsl5s" Nov 6 23:44:58.267868 kubelet[3367]: I1106 23:44:58.266895 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7503f8aa-b9b8-44f7-befe-b5b016f96a43-host-proc-sys-net\") pod \"cilium-rsl5s\" (UID: \"7503f8aa-b9b8-44f7-befe-b5b016f96a43\") " pod="kube-system/cilium-rsl5s" Nov 6 23:44:58.267868 kubelet[3367]: I1106 23:44:58.266916 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7503f8aa-b9b8-44f7-befe-b5b016f96a43-host-proc-sys-kernel\") pod \"cilium-rsl5s\" (UID: \"7503f8aa-b9b8-44f7-befe-b5b016f96a43\") " pod="kube-system/cilium-rsl5s" Nov 6 23:44:58.267868 kubelet[3367]: I1106 23:44:58.266936 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7503f8aa-b9b8-44f7-befe-b5b016f96a43-hubble-tls\") pod \"cilium-rsl5s\" (UID: \"7503f8aa-b9b8-44f7-befe-b5b016f96a43\") " pod="kube-system/cilium-rsl5s" Nov 6 23:44:58.267868 kubelet[3367]: I1106 23:44:58.266956 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7503f8aa-b9b8-44f7-befe-b5b016f96a43-bpf-maps\") pod \"cilium-rsl5s\" (UID: \"7503f8aa-b9b8-44f7-befe-b5b016f96a43\") " pod="kube-system/cilium-rsl5s" Nov 6 23:44:58.268107 kubelet[3367]: I1106 23:44:58.266974 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7503f8aa-b9b8-44f7-befe-b5b016f96a43-cilium-cgroup\") pod \"cilium-rsl5s\" (UID: \"7503f8aa-b9b8-44f7-befe-b5b016f96a43\") " pod="kube-system/cilium-rsl5s" Nov 6 23:44:58.268107 kubelet[3367]: I1106 23:44:58.266994 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7503f8aa-b9b8-44f7-befe-b5b016f96a43-cilium-ipsec-secrets\") pod \"cilium-rsl5s\" (UID: \"7503f8aa-b9b8-44f7-befe-b5b016f96a43\") " pod="kube-system/cilium-rsl5s" Nov 6 23:44:58.268107 kubelet[3367]: I1106 23:44:58.267017 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz685\" (UniqueName: \"kubernetes.io/projected/7503f8aa-b9b8-44f7-befe-b5b016f96a43-kube-api-access-wz685\") pod \"cilium-rsl5s\" (UID: \"7503f8aa-b9b8-44f7-befe-b5b016f96a43\") " pod="kube-system/cilium-rsl5s" Nov 6 23:44:58.314210 sshd[5133]: Connection closed by 10.200.16.10 port 50394 Nov 6 23:44:58.314927 sshd-session[5131]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:58.318848 systemd[1]: sshd@22-10.200.8.12:22-10.200.16.10:50394.service: Deactivated successfully. Nov 6 23:44:58.321015 systemd[1]: session-25.scope: Deactivated successfully. Nov 6 23:44:58.321881 systemd-logind[1729]: Session 25 logged out. Waiting for processes to exit. Nov 6 23:44:58.322998 systemd-logind[1729]: Removed session 25. Nov 6 23:44:58.431636 systemd[1]: Started sshd@23-10.200.8.12:22-10.200.16.10:50406.service - OpenSSH per-connection server daemon (10.200.16.10:50406). 
Nov 6 23:44:58.549542 containerd[1753]: time="2025-11-06T23:44:58.549502952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rsl5s,Uid:7503f8aa-b9b8-44f7-befe-b5b016f96a43,Namespace:kube-system,Attempt:0,}" Nov 6 23:44:58.582596 containerd[1753]: time="2025-11-06T23:44:58.582029095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:44:58.584409 containerd[1753]: time="2025-11-06T23:44:58.582558598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:44:58.584409 containerd[1753]: time="2025-11-06T23:44:58.582597198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:44:58.584409 containerd[1753]: time="2025-11-06T23:44:58.582697998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:44:58.607501 systemd[1]: Started cri-containerd-dde266a6635ad0421f47e27eecefef9ee0d625c1a2ba5098b4d77706ef23a329.scope - libcontainer container dde266a6635ad0421f47e27eecefef9ee0d625c1a2ba5098b4d77706ef23a329. Nov 6 23:44:58.630653 containerd[1753]: time="2025-11-06T23:44:58.630607410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rsl5s,Uid:7503f8aa-b9b8-44f7-befe-b5b016f96a43,Namespace:kube-system,Attempt:0,} returns sandbox id \"dde266a6635ad0421f47e27eecefef9ee0d625c1a2ba5098b4d77706ef23a329\"" Nov 6 23:44:58.643389 containerd[1753]: time="2025-11-06T23:44:58.643254165Z" level=info msg="CreateContainer within sandbox \"dde266a6635ad0421f47e27eecefef9ee0d625c1a2ba5098b4d77706ef23a329\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 23:44:58.671860 containerd[1753]: time="2025-11-06T23:44:58.671777591Z" level=info msg="CreateContainer within sandbox \"dde266a6635ad0421f47e27eecefef9ee0d625c1a2ba5098b4d77706ef23a329\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ad786999a8abc7aad7981a0f08b7b46e2bccb9002681462011da6074b605c165\"" Nov 6 23:44:58.672583 containerd[1753]: time="2025-11-06T23:44:58.672550495Z" level=info msg="StartContainer for \"ad786999a8abc7aad7981a0f08b7b46e2bccb9002681462011da6074b605c165\"" Nov 6 23:44:58.705492 systemd[1]: Started cri-containerd-ad786999a8abc7aad7981a0f08b7b46e2bccb9002681462011da6074b605c165.scope - libcontainer container ad786999a8abc7aad7981a0f08b7b46e2bccb9002681462011da6074b605c165. Nov 6 23:44:58.734167 containerd[1753]: time="2025-11-06T23:44:58.734133466Z" level=info msg="StartContainer for \"ad786999a8abc7aad7981a0f08b7b46e2bccb9002681462011da6074b605c165\" returns successfully" Nov 6 23:44:58.743320 systemd[1]: cri-containerd-ad786999a8abc7aad7981a0f08b7b46e2bccb9002681462011da6074b605c165.scope: Deactivated successfully. 
Nov 6 23:44:58.807618 containerd[1753]: time="2025-11-06T23:44:58.807055888Z" level=info msg="shim disconnected" id=ad786999a8abc7aad7981a0f08b7b46e2bccb9002681462011da6074b605c165 namespace=k8s.io Nov 6 23:44:58.807618 containerd[1753]: time="2025-11-06T23:44:58.807123488Z" level=warning msg="cleaning up after shim disconnected" id=ad786999a8abc7aad7981a0f08b7b46e2bccb9002681462011da6074b605c165 namespace=k8s.io Nov 6 23:44:58.807618 containerd[1753]: time="2025-11-06T23:44:58.807134288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:44:58.890289 kubelet[3367]: I1106 23:44:58.890237 3367 setters.go:543] "Node became not ready" node="ci-4230.2.4-n-c920fca088" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-06T23:44:58Z","lastTransitionTime":"2025-11-06T23:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 6 23:44:59.069764 sshd[5151]: Accepted publickey for core from 10.200.16.10 port 50406 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:44:59.071286 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:44:59.076626 systemd-logind[1729]: New session 26 of user core. Nov 6 23:44:59.083455 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 6 23:44:59.118268 containerd[1753]: time="2025-11-06T23:44:59.118044158Z" level=info msg="CreateContainer within sandbox \"dde266a6635ad0421f47e27eecefef9ee0d625c1a2ba5098b4d77706ef23a329\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 23:44:59.144494 containerd[1753]: time="2025-11-06T23:44:59.142405666Z" level=info msg="CreateContainer within sandbox \"dde266a6635ad0421f47e27eecefef9ee0d625c1a2ba5098b4d77706ef23a329\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0035c64cdb87449b7e5f841c276016cb653be66927b0cec6d1cc5b7faa662194\"" Nov 6 23:44:59.146006 containerd[1753]: time="2025-11-06T23:44:59.145969682Z" level=info msg="StartContainer for \"0035c64cdb87449b7e5f841c276016cb653be66927b0cec6d1cc5b7faa662194\"" Nov 6 23:44:59.205829 systemd[1]: Started cri-containerd-0035c64cdb87449b7e5f841c276016cb653be66927b0cec6d1cc5b7faa662194.scope - libcontainer container 0035c64cdb87449b7e5f841c276016cb653be66927b0cec6d1cc5b7faa662194. Nov 6 23:44:59.255000 containerd[1753]: time="2025-11-06T23:44:59.254714361Z" level=info msg="StartContainer for \"0035c64cdb87449b7e5f841c276016cb653be66927b0cec6d1cc5b7faa662194\" returns successfully" Nov 6 23:44:59.259473 systemd[1]: cri-containerd-0035c64cdb87449b7e5f841c276016cb653be66927b0cec6d1cc5b7faa662194.scope: Deactivated successfully. 
Nov 6 23:44:59.292473 containerd[1753]: time="2025-11-06T23:44:59.292410327Z" level=info msg="shim disconnected" id=0035c64cdb87449b7e5f841c276016cb653be66927b0cec6d1cc5b7faa662194 namespace=k8s.io Nov 6 23:44:59.292473 containerd[1753]: time="2025-11-06T23:44:59.292467927Z" level=warning msg="cleaning up after shim disconnected" id=0035c64cdb87449b7e5f841c276016cb653be66927b0cec6d1cc5b7faa662194 namespace=k8s.io Nov 6 23:44:59.292473 containerd[1753]: time="2025-11-06T23:44:59.292478527Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:44:59.533230 sshd[5254]: Connection closed by 10.200.16.10 port 50406 Nov 6 23:44:59.534626 sshd-session[5151]: pam_unix(sshd:session): session closed for user core Nov 6 23:44:59.537496 systemd[1]: sshd@23-10.200.8.12:22-10.200.16.10:50406.service: Deactivated successfully. Nov 6 23:44:59.539813 systemd[1]: session-26.scope: Deactivated successfully. Nov 6 23:44:59.541744 systemd-logind[1729]: Session 26 logged out. Waiting for processes to exit. Nov 6 23:44:59.542812 systemd-logind[1729]: Removed session 26. Nov 6 23:44:59.652705 systemd[1]: Started sshd@24-10.200.8.12:22-10.200.16.10:50412.service - OpenSSH per-connection server daemon (10.200.16.10:50412). Nov 6 23:45:00.121658 containerd[1753]: time="2025-11-06T23:45:00.121600983Z" level=info msg="CreateContainer within sandbox \"dde266a6635ad0421f47e27eecefef9ee0d625c1a2ba5098b4d77706ef23a329\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 23:45:00.258939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1481874762.mount: Deactivated successfully. Nov 6 23:45:00.280054 containerd[1753]: time="2025-11-06T23:45:00.279996581Z" level=info msg="CreateContainer within sandbox \"dde266a6635ad0421f47e27eecefef9ee0d625c1a2ba5098b4d77706ef23a329\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"553e81235fe68252b72769fd811ebaca9c71b2274d55a0ab62a467f093579ee8\"" Nov 6 23:45:00.280897 containerd[1753]: time="2025-11-06T23:45:00.280836585Z" level=info msg="StartContainer for \"553e81235fe68252b72769fd811ebaca9c71b2274d55a0ab62a467f093579ee8\"" Nov 6 23:45:00.292391 sshd[5321]: Accepted publickey for core from 10.200.16.10 port 50412 ssh2: RSA SHA256:9GWrvebhwQx9uSFlofVHoTo93EtJIJBstCueT1g4cDo Nov 6 23:45:00.294344 sshd-session[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:45:00.311563 systemd-logind[1729]: New session 27 of user core. Nov 6 23:45:00.316490 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 6 23:45:00.326450 systemd[1]: Started cri-containerd-553e81235fe68252b72769fd811ebaca9c71b2274d55a0ab62a467f093579ee8.scope - libcontainer container 553e81235fe68252b72769fd811ebaca9c71b2274d55a0ab62a467f093579ee8. Nov 6 23:45:00.361707 systemd[1]: cri-containerd-553e81235fe68252b72769fd811ebaca9c71b2274d55a0ab62a467f093579ee8.scope: Deactivated successfully. Nov 6 23:45:00.363420 containerd[1753]: time="2025-11-06T23:45:00.363377448Z" level=info msg="StartContainer for \"553e81235fe68252b72769fd811ebaca9c71b2274d55a0ab62a467f093579ee8\" returns successfully" Nov 6 23:45:00.385678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-553e81235fe68252b72769fd811ebaca9c71b2274d55a0ab62a467f093579ee8-rootfs.mount: Deactivated successfully. 
Nov 6 23:45:00.400048 containerd[1753]: time="2025-11-06T23:45:00.399970510Z" level=info msg="shim disconnected" id=553e81235fe68252b72769fd811ebaca9c71b2274d55a0ab62a467f093579ee8 namespace=k8s.io Nov 6 23:45:00.400048 containerd[1753]: time="2025-11-06T23:45:00.400045810Z" level=warning msg="cleaning up after shim disconnected" id=553e81235fe68252b72769fd811ebaca9c71b2274d55a0ab62a467f093579ee8 namespace=k8s.io Nov 6 23:45:00.400048 containerd[1753]: time="2025-11-06T23:45:00.400057610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:45:00.808092 kubelet[3367]: E1106 23:45:00.808037 3367 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 6 23:45:01.126930 containerd[1753]: time="2025-11-06T23:45:01.126825514Z" level=info msg="CreateContainer within sandbox \"dde266a6635ad0421f47e27eecefef9ee0d625c1a2ba5098b4d77706ef23a329\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 23:45:01.158124 containerd[1753]: time="2025-11-06T23:45:01.158076452Z" level=info msg="CreateContainer within sandbox \"dde266a6635ad0421f47e27eecefef9ee0d625c1a2ba5098b4d77706ef23a329\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"99a89d6991b265e9e1ed2506c0770f2c8243bf67dfbb0bf5061407b4ec33d2ba\"" Nov 6 23:45:01.161323 containerd[1753]: time="2025-11-06T23:45:01.158834155Z" level=info msg="StartContainer for \"99a89d6991b265e9e1ed2506c0770f2c8243bf67dfbb0bf5061407b4ec33d2ba\"" Nov 6 23:45:01.194504 systemd[1]: Started cri-containerd-99a89d6991b265e9e1ed2506c0770f2c8243bf67dfbb0bf5061407b4ec33d2ba.scope - libcontainer container 99a89d6991b265e9e1ed2506c0770f2c8243bf67dfbb0bf5061407b4ec33d2ba. Nov 6 23:45:01.220553 systemd[1]: cri-containerd-99a89d6991b265e9e1ed2506c0770f2c8243bf67dfbb0bf5061407b4ec33d2ba.scope: Deactivated successfully. Nov 6 23:45:01.225480 containerd[1753]: time="2025-11-06T23:45:01.225438649Z" level=info msg="StartContainer for \"99a89d6991b265e9e1ed2506c0770f2c8243bf67dfbb0bf5061407b4ec33d2ba\" returns successfully" Nov 6 23:45:01.265002 containerd[1753]: time="2025-11-06T23:45:01.264903823Z" level=info msg="shim disconnected" id=99a89d6991b265e9e1ed2506c0770f2c8243bf67dfbb0bf5061407b4ec33d2ba namespace=k8s.io Nov 6 23:45:01.265002 containerd[1753]: time="2025-11-06T23:45:01.264972723Z" level=warning msg="cleaning up after shim disconnected" id=99a89d6991b265e9e1ed2506c0770f2c8243bf67dfbb0bf5061407b4ec33d2ba namespace=k8s.io Nov 6 23:45:01.265002 containerd[1753]: time="2025-11-06T23:45:01.264984423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:45:01.385629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99a89d6991b265e9e1ed2506c0770f2c8243bf67dfbb0bf5061407b4ec33d2ba-rootfs.mount: Deactivated successfully. 
Nov 6 23:45:02.130918 containerd[1753]: time="2025-11-06T23:45:02.130867240Z" level=info msg="CreateContainer within sandbox \"dde266a6635ad0421f47e27eecefef9ee0d625c1a2ba5098b4d77706ef23a329\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 23:45:02.167440 containerd[1753]: time="2025-11-06T23:45:02.167388401Z" level=info msg="CreateContainer within sandbox \"dde266a6635ad0421f47e27eecefef9ee0d625c1a2ba5098b4d77706ef23a329\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6f69739f784efd06caa82a3a93b413c896da724445a420cd9d4800a7949a57ec\"" Nov 6 23:45:02.168332 containerd[1753]: time="2025-11-06T23:45:02.168278105Z" level=info msg="StartContainer for \"6f69739f784efd06caa82a3a93b413c896da724445a420cd9d4800a7949a57ec\"" Nov 6 23:45:02.209527 systemd[1]: Started cri-containerd-6f69739f784efd06caa82a3a93b413c896da724445a420cd9d4800a7949a57ec.scope - libcontainer container 6f69739f784efd06caa82a3a93b413c896da724445a420cd9d4800a7949a57ec. Nov 6 23:45:02.245764 containerd[1753]: time="2025-11-06T23:45:02.245707247Z" level=info msg="StartContainer for \"6f69739f784efd06caa82a3a93b413c896da724445a420cd9d4800a7949a57ec\" returns successfully" Nov 6 23:45:02.825391 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 6 23:45:03.145631 kubelet[3367]: I1106 23:45:03.145461 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rsl5s" podStartSLOduration=5.145442826 podStartE2EDuration="5.145442826s" podCreationTimestamp="2025-11-06 23:44:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:45:03.145127824 +0000 UTC m=+158.406849610" watchObservedRunningTime="2025-11-06 23:45:03.145442826 +0000 UTC m=+158.407164612" Nov 6 23:45:05.708772 systemd-networkd[1344]: lxc_health: Link UP Nov 6 23:45:05.731651 systemd-networkd[1344]: lxc_health: Gained carrier Nov 6 23:45:06.834496 systemd-networkd[1344]: lxc_health: Gained IPv6LL Nov 6 23:45:07.000537 systemd[1]: run-containerd-runc-k8s.io-6f69739f784efd06caa82a3a93b413c896da724445a420cd9d4800a7949a57ec-runc.A2YVR4.mount: Deactivated successfully. Nov 6 23:45:09.242806 systemd[1]: run-containerd-runc-k8s.io-6f69739f784efd06caa82a3a93b413c896da724445a420cd9d4800a7949a57ec-runc.z99vDR.mount: Deactivated successfully. Nov 6 23:45:11.546958 sshd[5342]: Connection closed by 10.200.16.10 port 50412 Nov 6 23:45:11.547809 sshd-session[5321]: pam_unix(sshd:session): session closed for user core Nov 6 23:45:11.550813 systemd[1]: sshd@24-10.200.8.12:22-10.200.16.10:50412.service: Deactivated successfully. Nov 6 23:45:11.553329 systemd[1]: session-27.scope: Deactivated successfully. Nov 6 23:45:11.555069 systemd-logind[1729]: Session 27 logged out. Waiting for processes to exit. Nov 6 23:45:11.556172 systemd-logind[1729]: Removed session 27.