Nov 4 23:52:52.560584 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 22:00:22 -00 2025
Nov 4 23:52:52.560614 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:52:52.560627 kernel: BIOS-provided physical RAM map:
Nov 4 23:52:52.560636 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 4 23:52:52.560643 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Nov 4 23:52:52.560651 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Nov 4 23:52:52.560660 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Nov 4 23:52:52.560667 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Nov 4 23:52:52.560677 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Nov 4 23:52:52.560685 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Nov 4 23:52:52.560692 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Nov 4 23:52:52.560700 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Nov 4 23:52:52.560707 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Nov 4 23:52:52.560715 kernel: printk: legacy bootconsole [earlyser0] enabled
Nov 4 23:52:52.560726 kernel: NX (Execute Disable) protection: active
Nov 4 23:52:52.560734 kernel: APIC: Static calls initialized
Nov 4 23:52:52.560742 kernel: efi: EFI v2.7 by Microsoft
Nov 4 23:52:52.560751 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eab5518 RNG=0x3ffd2018
Nov 4 23:52:52.560759 kernel: random: crng init done
Nov 4 23:52:52.560767 kernel: secureboot: Secure boot disabled
Nov 4 23:52:52.560775 kernel: SMBIOS 3.1.0 present.
Nov 4 23:52:52.560783 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Nov 4 23:52:52.560792 kernel: DMI: Memory slots populated: 2/2 Nov 4 23:52:52.560801 kernel: Hypervisor detected: Microsoft Hyper-V Nov 4 23:52:52.560810 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Nov 4 23:52:52.560817 kernel: Hyper-V: Nested features: 0x3e0101 Nov 4 23:52:52.560825 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Nov 4 23:52:52.560833 kernel: Hyper-V: Using hypercall for remote TLB flush Nov 4 23:52:52.560841 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 4 23:52:52.560849 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 4 23:52:52.560857 kernel: tsc: Detected 2299.999 MHz processor Nov 4 23:52:52.560866 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 4 23:52:52.560875 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 4 23:52:52.560886 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Nov 4 23:52:52.560895 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 4 23:52:52.560904 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 4 23:52:52.560913 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Nov 4 23:52:52.560922 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Nov 4 23:52:52.560930 kernel: Using GB pages for direct mapping Nov 4 23:52:52.560939 kernel: ACPI: Early table checksum verification disabled Nov 4 23:52:52.560953 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Nov 4 23:52:52.560962 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 4 23:52:52.560971 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 4 23:52:52.560980 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Nov 4 23:52:52.560989 kernel: ACPI: FACS 0x000000003FFFE000 000040 Nov 4 23:52:52.561001 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 4 23:52:52.561010 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 4 23:52:52.561019 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 4 23:52:52.561028 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Nov 4 23:52:52.561037 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Nov 4 23:52:52.561046 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 4 23:52:52.561057 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Nov 4 23:52:52.561066 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Nov 4 23:52:52.561074 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Nov 4 23:52:52.561083 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Nov 4 23:52:52.561093 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Nov 4 23:52:52.561102 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Nov 4 23:52:52.561112 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051] Nov 4 23:52:52.561123 
kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Nov 4 23:52:52.561133 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Nov 4 23:52:52.561142 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Nov 4 23:52:52.561152 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Nov 4 23:52:52.561162 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Nov 4 23:52:52.561172 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Nov 4 23:52:52.561181 kernel: Zone ranges: Nov 4 23:52:52.561192 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 4 23:52:52.561202 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 4 23:52:52.561211 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Nov 4 23:52:52.561221 kernel: Device empty Nov 4 23:52:52.561230 kernel: Movable zone start for each node Nov 4 23:52:52.561240 kernel: Early memory node ranges Nov 4 23:52:52.561249 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 4 23:52:52.561259 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Nov 4 23:52:52.561270 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Nov 4 23:52:52.561280 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Nov 4 23:52:52.561289 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Nov 4 23:52:52.561299 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Nov 4 23:52:52.561308 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 4 23:52:52.561317 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 4 23:52:52.561327 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 4 23:52:52.561338 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Nov 4 23:52:52.561348 kernel: ACPI: PM-Timer IO Port: 0x408 Nov 4 23:52:52.561357 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 4 23:52:52.561367 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 4 23:52:52.561377 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 4 23:52:52.561386 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Nov 4 23:52:52.561395 kernel: TSC deadline timer available Nov 4 23:52:52.561407 kernel: CPU topo: Max. logical packages: 1 Nov 4 23:52:52.561416 kernel: CPU topo: Max. logical dies: 1 Nov 4 23:52:52.561425 kernel: CPU topo: Max. dies per package: 1 Nov 4 23:52:52.561435 kernel: CPU topo: Max. threads per core: 2 Nov 4 23:52:52.561444 kernel: CPU topo: Num. cores per package: 1 Nov 4 23:52:52.561453 kernel: CPU topo: Num. 
threads per package: 2 Nov 4 23:52:52.561462 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 4 23:52:52.561474 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Nov 4 23:52:52.564059 kernel: Booting paravirtualized kernel on Hyper-V Nov 4 23:52:52.564073 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 4 23:52:52.564084 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 4 23:52:52.564094 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 4 23:52:52.564103 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 4 23:52:52.564113 kernel: pcpu-alloc: [0] 0 1 Nov 4 23:52:52.564127 kernel: Hyper-V: PV spinlocks enabled Nov 4 23:52:52.564136 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 4 23:52:52.564148 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2 Nov 4 23:52:52.564159 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 4 23:52:52.564169 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 4 23:52:52.564178 kernel: Fallback order for Node 0: 0 Nov 4 23:52:52.564188 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Nov 4 23:52:52.564199 kernel: Policy zone: Normal Nov 4 23:52:52.564209 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 4 23:52:52.564218 kernel: software IO TLB: area num 2. Nov 4 23:52:52.564228 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 4 23:52:52.564237 kernel: ftrace: allocating 40092 entries in 157 pages Nov 4 23:52:52.564247 kernel: ftrace: allocated 157 pages with 5 groups Nov 4 23:52:52.564256 kernel: Dynamic Preempt: voluntary Nov 4 23:52:52.564267 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 4 23:52:52.564278 kernel: rcu: RCU event tracing is enabled. Nov 4 23:52:52.564288 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 4 23:52:52.564304 kernel: Trampoline variant of Tasks RCU enabled. Nov 4 23:52:52.564316 kernel: Rude variant of Tasks RCU enabled. Nov 4 23:52:52.564326 kernel: Tracing variant of Tasks RCU enabled. Nov 4 23:52:52.564336 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 4 23:52:52.564347 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 4 23:52:52.564357 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 4 23:52:52.564368 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 4 23:52:52.564378 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 4 23:52:52.564387 kernel: Using NULL legacy PIC Nov 4 23:52:52.564397 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Nov 4 23:52:52.564408 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 4 23:52:52.564417 kernel: Console: colour dummy device 80x25 Nov 4 23:52:52.564426 kernel: printk: legacy console [tty1] enabled Nov 4 23:52:52.564436 kernel: printk: legacy console [ttyS0] enabled Nov 4 23:52:52.564445 kernel: printk: legacy bootconsole [earlyser0] disabled Nov 4 23:52:52.564454 kernel: ACPI: Core revision 20240827 Nov 4 23:52:52.564464 kernel: Failed to register legacy timer interrupt Nov 4 23:52:52.564475 kernel: APIC: Switch to symmetric I/O mode setup Nov 4 23:52:52.564503 kernel: x2apic enabled Nov 4 23:52:52.564513 kernel: APIC: Switched APIC routing to: physical x2apic Nov 4 23:52:52.564523 kernel: Hyper-V: Host Build 10.0.26100.1414-1-0 Nov 4 23:52:52.564532 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 4 23:52:52.564542 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Nov 4 23:52:52.564552 kernel: Hyper-V: Using IPI hypercalls Nov 4 23:52:52.564563 kernel: APIC: send_IPI() replaced with hv_send_ipi() Nov 4 23:52:52.564573 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Nov 4 23:52:52.564583 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Nov 4 23:52:52.564593 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Nov 4 23:52:52.564602 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Nov 4 23:52:52.564611 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Nov 4 23:52:52.564621 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Nov 4 23:52:52.564632 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999) Nov 4 23:52:52.564642 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 4 23:52:52.564651 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 4 23:52:52.564659 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 4 23:52:52.564668 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 4 23:52:52.564677 kernel: Spectre V2 : Mitigation: Retpolines Nov 4 23:52:52.564685 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 4 23:52:52.564693 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Nov 4 23:52:52.564702 kernel: RETBleed: Vulnerable
Nov 4 23:52:52.564712 kernel: Speculative Store Bypass: Vulnerable
Nov 4 23:52:52.564721 kernel: active return thunk: its_return_thunk
Nov 4 23:52:52.564730 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 4 23:52:52.564738 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 23:52:52.564747 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 23:52:52.564756 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 23:52:52.564765 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 4 23:52:52.564774 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 4 23:52:52.564784 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 4 23:52:52.564792 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Nov 4 23:52:52.564804 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Nov 4 23:52:52.564813 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Nov 4 23:52:52.564822 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 23:52:52.564832 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Nov 4 23:52:52.564841 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Nov 4 23:52:52.564850 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Nov 4 23:52:52.564858 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Nov 4 23:52:52.564867 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Nov 4 23:52:52.564875 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Nov 4 23:52:52.564884 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Nov 4 23:52:52.564895 kernel: Freeing SMP alternatives memory: 32K
Nov 4 23:52:52.564904 kernel: pid_max: default: 32768 minimum: 301
Nov 4 23:52:52.564912 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 23:52:52.564920 kernel: landlock: Up and running.
Nov 4 23:52:52.564929 kernel: SELinux: Initializing.
Nov 4 23:52:52.564938 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 4 23:52:52.564948 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 4 23:52:52.564957 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Nov 4 23:52:52.564967 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Nov 4 23:52:52.564976 kernel: signal: max sigframe size: 11952
Nov 4 23:52:52.564987 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 23:52:52.564998 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 23:52:52.565006 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 23:52:52.565016 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 4 23:52:52.565027 kernel: smp: Bringing up secondary CPUs ...
Nov 4 23:52:52.565035 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 23:52:52.565044 kernel: .... node #0, CPUs: #1
Nov 4 23:52:52.565054 kernel: smp: Brought up 1 node, 2 CPUs
Nov 4 23:52:52.565065 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Nov 4 23:52:52.565076 kernel: Memory: 8099552K/8383228K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 277460K reserved, 0K cma-reserved)
Nov 4 23:52:52.565086 kernel: devtmpfs: initialized
Nov 4 23:52:52.565095 kernel: x86/mm: Memory block size: 128MB
Nov 4 23:52:52.565105 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Nov 4 23:52:52.565115 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 23:52:52.565125 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 4 23:52:52.565137 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 23:52:52.565147 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 23:52:52.565157 kernel: audit: initializing netlink subsys (disabled)
Nov 4 23:52:52.565167 kernel: audit: type=2000 audit(1762300367.029:1): state=initialized audit_enabled=0 res=1
Nov 4 23:52:52.565177 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 23:52:52.565187 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 23:52:52.565197 kernel: cpuidle: using governor menu
Nov 4 23:52:52.565208 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 23:52:52.565218 kernel: dca service started, version 1.12.1
Nov 4 23:52:52.565228 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Nov 4 23:52:52.565238 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Nov 4 23:52:52.565248 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 23:52:52.565258 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 4 23:52:52.565269 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 4 23:52:52.565280 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 4 23:52:52.565290 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 4 23:52:52.565300 kernel: ACPI: Added _OSI(Module Device) Nov 4 23:52:52.565310 kernel: ACPI: Added _OSI(Processor Device) Nov 4 23:52:52.565320 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 4 23:52:52.565331 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 4 23:52:52.565340 kernel: ACPI: Interpreter enabled Nov 4 23:52:52.565351 kernel: ACPI: PM: (supports S0 S5) Nov 4 23:52:52.565360 kernel: ACPI: Using IOAPIC for interrupt routing Nov 4 23:52:52.565369 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 4 23:52:52.565378 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 4 23:52:52.565388 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Nov 4 23:52:52.565397 kernel: iommu: Default domain type: Translated Nov 4 23:52:52.565406 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 4 23:52:52.565844 kernel: efivars: Registered efivars operations Nov 4 23:52:52.565856 kernel: PCI: Using ACPI for IRQ routing Nov 4 23:52:52.565866 kernel: PCI: System does not support PCI Nov 4 23:52:52.565876 kernel: vgaarb: loaded Nov 4 23:52:52.565886 kernel: clocksource: Switched to clocksource tsc-early Nov 4 23:52:52.565895 kernel: VFS: Disk quotas dquot_6.6.0 Nov 4 23:52:52.565904 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 4 23:52:52.565916 kernel: pnp: PnP ACPI init Nov 4 23:52:52.565926 kernel: pnp: PnP ACPI: found 3 devices Nov 4 23:52:52.565935 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 4 23:52:52.565945 kernel: NET: Registered PF_INET protocol family Nov 4 23:52:52.565955 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 4 23:52:52.565965 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 4 23:52:52.565974 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 4 23:52:52.565986 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 4 23:52:52.565995 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 4 23:52:52.566005 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 4 23:52:52.566014 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 4 23:52:52.566023 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 4 23:52:52.566033 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 4 23:52:52.566042 kernel: NET: Registered PF_XDP protocol family Nov 4 23:52:52.566054 kernel: PCI: CLS 0 bytes, default 64 Nov 4 23:52:52.566063 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 4 23:52:52.566073 kernel: software IO TLB: mapped [mem 0x00000000366e8000-0x000000003a6e8000] (64MB) Nov 4 23:52:52.566083 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Nov 4 23:52:52.566092 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Nov 4 23:52:52.566102 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Nov 4 
23:52:52.566111 kernel: clocksource: Switched to clocksource tsc Nov 4 23:52:52.566122 kernel: Initialise system trusted keyrings Nov 4 23:52:52.566132 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 4 23:52:52.566141 kernel: Key type asymmetric registered Nov 4 23:52:52.566151 kernel: Asymmetric key parser 'x509' registered Nov 4 23:52:52.566160 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 4 23:52:52.566170 kernel: io scheduler mq-deadline registered Nov 4 23:52:52.566179 kernel: io scheduler kyber registered Nov 4 23:52:52.566190 kernel: io scheduler bfq registered Nov 4 23:52:52.566199 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 4 23:52:52.566209 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 4 23:52:52.566218 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 4 23:52:52.566228 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 4 23:52:52.566238 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Nov 4 23:52:52.566247 kernel: i8042: PNP: No PS/2 controller found. Nov 4 23:52:52.566426 kernel: rtc_cmos 00:02: registered as rtc0 Nov 4 23:52:52.566552 kernel: rtc_cmos 00:02: setting system clock to 2025-11-04T23:52:48 UTC (1762300368) Nov 4 23:52:52.566656 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Nov 4 23:52:52.566668 kernel: intel_pstate: Intel P-state driver initializing Nov 4 23:52:52.566678 kernel: efifb: probing for efifb Nov 4 23:52:52.566688 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 4 23:52:52.566700 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 4 23:52:52.566709 kernel: efifb: scrolling: redraw Nov 4 23:52:52.566718 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 4 23:52:52.566733 kernel: Console: switching to colour frame buffer device 128x48 Nov 4 23:52:52.566742 kernel: fb0: EFI VGA frame buffer device Nov 4 23:52:52.566751 kernel: pstore: Using crash dump compression: deflate Nov 4 23:52:52.566761 kernel: pstore: Registered efi_pstore as persistent store backend Nov 4 23:52:52.566770 kernel: NET: Registered PF_INET6 protocol family Nov 4 23:52:52.566781 kernel: Segment Routing with IPv6 Nov 4 23:52:52.566791 kernel: In-situ OAM (IOAM) with IPv6 Nov 4 23:52:52.566800 kernel: NET: Registered PF_PACKET protocol family Nov 4 23:52:52.566809 kernel: Key type dns_resolver registered Nov 4 23:52:52.566818 kernel: IPI shorthand broadcast: enabled Nov 4 23:52:52.566827 kernel: sched_clock: Marking stable (1593004806, 114491516)->(2102519830, -395023508) Nov 4 23:52:52.566837 kernel: registered taskstats version 1 Nov 4 23:52:52.566849 kernel: Loading compiled-in X.509 certificates Nov 4 23:52:52.566858 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ace064fb6689a15889f35c6439909c760a72ef44' Nov 4 23:52:52.566868 kernel: Demotion targets for Node 0: null Nov 4 23:52:52.566878 kernel: Key type .fscrypt registered Nov 4 23:52:52.566887 kernel: Key type fscrypt-provisioning registered Nov 4 23:52:52.566896 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 4 23:52:52.566906 kernel: ima: Allocated hash algorithm: sha1 Nov 4 23:52:52.566916 kernel: ima: No architecture policies found Nov 4 23:52:52.566925 kernel: clk: Disabling unused clocks Nov 4 23:52:52.566935 kernel: Freeing unused kernel image (initmem) memory: 15936K Nov 4 23:52:52.566945 kernel: Write protecting the kernel read-only data: 40960k Nov 4 23:52:52.566954 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 4 23:52:52.566963 kernel: Run /init as init process Nov 4 23:52:52.566972 kernel: with arguments: Nov 4 23:52:52.566987 kernel: /init Nov 4 23:52:52.567002 kernel: with environment: Nov 4 23:52:52.567012 kernel: HOME=/ Nov 4 23:52:52.567021 kernel: TERM=linux Nov 4 23:52:52.567030 kernel: hv_vmbus: Vmbus version:5.3 Nov 4 23:52:52.567040 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 4 23:52:52.567050 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 4 23:52:52.567059 kernel: PTP clock support registered Nov 4 23:52:52.567070 kernel: hv_utils: Registering HyperV Utility Driver Nov 4 23:52:52.567079 kernel: hv_vmbus: registering driver hv_utils Nov 4 23:52:52.567088 kernel: hv_utils: Shutdown IC version 3.2 Nov 4 23:52:52.567097 kernel: hv_utils: Heartbeat IC version 3.0 Nov 4 23:52:52.567107 kernel: hv_utils: TimeSync IC version 4.0 Nov 4 23:52:52.567116 kernel: SCSI subsystem initialized Nov 4 23:52:52.567126 kernel: hv_vmbus: registering driver hv_pci Nov 4 23:52:52.567268 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Nov 4 23:52:52.567872 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Nov 4 23:52:52.568008 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Nov 4 23:52:52.568127 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Nov 4 23:52:52.568266 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Nov 4 23:52:52.568402 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Nov 4 23:52:52.568621 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Nov 4 23:52:52.568761 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Nov 4 23:52:52.568773 kernel: hv_vmbus: registering driver hv_storvsc Nov 4 23:52:52.568934 kernel: scsi host0: storvsc_host_t Nov 4 23:52:52.569044 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Nov 4 23:52:52.569054 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 4 23:52:52.569060 kernel: hv_vmbus: registering driver hid_hyperv Nov 4 23:52:52.569067 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Nov 4 23:52:52.569156 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 4 23:52:52.569165 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 4 23:52:52.569173 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Nov 4 23:52:52.569252 kernel: nvme nvme0: pci function c05b:00:00.0 Nov 4 23:52:52.569344 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Nov 4 23:52:52.569411 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 4 23:52:52.569419 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 4 23:52:52.569527 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 4 23:52:52.569538 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 4 
23:52:52.569624 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 4 23:52:52.569632 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 4 23:52:52.569639 kernel: device-mapper: uevent: version 1.0.3 Nov 4 23:52:52.569645 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 4 23:52:52.569651 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 4 23:52:52.569660 kernel: raid6: avx512x4 gen() 44537 MB/s Nov 4 23:52:52.569676 kernel: raid6: avx512x2 gen() 44223 MB/s Nov 4 23:52:52.569684 kernel: raid6: avx512x1 gen() 25110 MB/s Nov 4 23:52:52.569690 kernel: raid6: avx2x4 gen() 35648 MB/s Nov 4 23:52:52.569696 kernel: raid6: avx2x2 gen() 37497 MB/s Nov 4 23:52:52.569703 kernel: raid6: avx2x1 gen() 30581 MB/s Nov 4 23:52:52.569709 kernel: raid6: using algorithm avx512x4 gen() 44537 MB/s Nov 4 23:52:52.569715 kernel: raid6: .... xor() 7715 MB/s, rmw enabled Nov 4 23:52:52.569723 kernel: raid6: using avx512x2 recovery algorithm Nov 4 23:52:52.569729 kernel: xor: automatically using best checksumming function avx Nov 4 23:52:52.569736 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 4 23:52:52.569743 kernel: BTRFS: device fsid f719dc90-1cf7-4f08-a80f-0dda441372cc devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (954) Nov 4 23:52:52.569749 kernel: BTRFS info (device dm-0): first mount of filesystem f719dc90-1cf7-4f08-a80f-0dda441372cc Nov 4 23:52:52.569755 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:52:52.569762 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 4 23:52:52.569770 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 4 23:52:52.569776 kernel: BTRFS info (device dm-0): enabling free space tree Nov 4 23:52:52.569783 kernel: loop: module loaded Nov 4 23:52:52.569789 kernel: loop0: detected capacity change from 0 to 100120 Nov 4 23:52:52.569795 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 4 23:52:52.569803 systemd[1]: Successfully made /usr/ read-only. Nov 4 23:52:52.569814 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 23:52:52.569821 systemd[1]: Detected virtualization microsoft. Nov 4 23:52:52.569828 systemd[1]: Detected architecture x86-64. Nov 4 23:52:52.569834 systemd[1]: Running in initrd. Nov 4 23:52:52.569841 systemd[1]: No hostname configured, using default hostname. Nov 4 23:52:52.569848 systemd[1]: Hostname set to . Nov 4 23:52:52.569856 systemd[1]: Initializing machine ID from random generator. Nov 4 23:52:52.569862 systemd[1]: Queued start job for default target initrd.target. Nov 4 23:52:52.569869 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 23:52:52.569876 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:52:52.569882 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:52:52.569890 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Nov 4 23:52:52.569897 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 23:52:52.569905 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 4 23:52:52.569912 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 4 23:52:52.569919 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 23:52:52.569928 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:52:52.569934 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 4 23:52:52.569941 systemd[1]: Reached target paths.target - Path Units. Nov 4 23:52:52.569948 systemd[1]: Reached target slices.target - Slice Units. Nov 4 23:52:52.569955 systemd[1]: Reached target swap.target - Swaps. Nov 4 23:52:52.569961 systemd[1]: Reached target timers.target - Timer Units. Nov 4 23:52:52.569969 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 23:52:52.569978 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 23:52:52.569984 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 4 23:52:52.569991 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 4 23:52:52.569998 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:52:52.570005 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 23:52:52.570012 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:52:52.570020 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 23:52:52.570027 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 4 23:52:52.570033 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 4 23:52:52.570040 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 23:52:52.570047 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 4 23:52:52.570054 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 4 23:52:52.570061 systemd[1]: Starting systemd-fsck-usr.service... Nov 4 23:52:52.570069 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 23:52:52.570075 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 23:52:52.570093 systemd-journald[1087]: Collecting audit messages is disabled. Nov 4 23:52:52.570112 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:52:52.570120 systemd-journald[1087]: Journal started Nov 4 23:52:52.570138 systemd-journald[1087]: Runtime Journal (/run/log/journal/421af98035ac4e138434b7b05ebe3d89) is 8M, max 158.6M, 150.6M free. Nov 4 23:52:52.579550 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 4 23:52:52.584496 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 23:52:52.587918 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:52:52.590746 systemd[1]: Finished systemd-fsck-usr.service. Nov 4 23:52:52.610822 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Nov 4 23:52:52.613603 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 23:52:52.653498 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 4 23:52:52.655422 systemd-modules-load[1092]: Inserted module 'br_netfilter' Nov 4 23:52:52.656022 kernel: Bridge firewalling registered Nov 4 23:52:52.657468 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 23:52:52.660585 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:52:52.777363 systemd-tmpfiles[1102]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 4 23:52:52.778895 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 23:52:52.779261 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:52:52.782599 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 23:52:52.783231 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 23:52:52.804422 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:52:52.808305 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:52:52.812446 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:52:52.817972 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 4 23:52:52.853193 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 23:52:52.908805 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 4 23:52:52.964940 systemd-resolved[1112]: Positive Trust Anchors: Nov 4 23:52:52.964955 systemd-resolved[1112]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 23:52:52.964959 systemd-resolved[1112]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 23:52:52.990381 dracut-cmdline[1131]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2 Nov 4 23:52:52.964999 systemd-resolved[1112]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 23:52:52.985981 systemd-resolved[1112]: Defaulting to hostname 'linux'. Nov 4 23:52:52.986967 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 23:52:53.021660 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Nov 4 23:52:53.141500 kernel: Loading iSCSI transport class v2.0-870. Nov 4 23:52:53.218504 kernel: iscsi: registered transport (tcp) Nov 4 23:52:53.290712 kernel: iscsi: registered transport (qla4xxx) Nov 4 23:52:53.290787 kernel: QLogic iSCSI HBA Driver Nov 4 23:52:53.339646 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 23:52:53.359475 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:52:53.360313 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 23:52:53.399860 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 4 23:52:53.402775 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 4 23:52:53.408175 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 4 23:52:53.437599 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 4 23:52:53.444612 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:52:53.476565 systemd-udevd[1378]: Using default interface naming scheme 'v257'. Nov 4 23:52:53.490082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:52:53.497915 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 4 23:52:53.507206 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 23:52:53.517629 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 23:52:53.523618 dracut-pre-trigger[1464]: rd.md=0: removing MD RAID activation Nov 4 23:52:53.549394 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 23:52:53.555958 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 23:52:53.566433 systemd-networkd[1476]: lo: Link UP Nov 4 23:52:53.566436 systemd-networkd[1476]: lo: Gained carrier Nov 4 23:52:53.579021 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 23:52:53.586045 systemd[1]: Reached target network.target - Network. Nov 4 23:52:53.623012 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:52:53.631899 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 4 23:52:53.697644 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:52:53.699396 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:52:53.703775 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:52:53.710839 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:52:53.749719 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 4 23:52:53.769497 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#297 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 4 23:52:53.771520 kernel: hv_vmbus: registering driver hv_netvsc Nov 4 23:52:53.781498 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fadad3 (unnamed net_device) (uninitialized): VF slot 1 added Nov 4 23:52:53.789523 kernel: cryptd: max_cpu_qlen set to 1000 Nov 4 23:52:53.815691 systemd-networkd[1476]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:52:53.815701 systemd-networkd[1476]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 23:52:53.816433 systemd-networkd[1476]: eth0: Link UP Nov 4 23:52:53.816550 systemd-networkd[1476]: eth0: Gained carrier Nov 4 23:52:53.816562 systemd-networkd[1476]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:52:53.828515 systemd-networkd[1476]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 4 23:52:53.850503 kernel: AES CTR mode by8 optimization enabled Nov 4 23:52:53.951500 kernel: nvme nvme0: using unchecked data buffer Nov 4 23:52:54.054249 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Nov 4 23:52:54.057984 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 4 23:52:54.181436 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Nov 4 23:52:54.193131 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 4 23:52:54.208144 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Nov 4 23:52:54.297463 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 4 23:52:54.300719 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 23:52:54.303978 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 23:52:54.306623 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 23:52:54.318623 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 4 23:52:54.399025 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Nov 4 23:52:54.798502 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Nov 4 23:52:54.802461 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Nov 4 23:52:54.802688 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Nov 4 23:52:54.804153 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Nov 4 23:52:54.810619 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Nov 4 23:52:54.814615 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Nov 4 23:52:54.819536 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Nov 4 23:52:54.819616 kernel: pci 7870:00:00.0: enabling Extended Tags Nov 4 23:52:54.835558 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Nov 4 23:52:54.835769 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Nov 4 23:52:54.839598 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Nov 4 23:52:54.857661 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Nov 4 23:52:54.867496 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Nov 4 23:52:54.871501 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fadad3 eth0: VF registering: eth1 Nov 4 23:52:54.871688 kernel: mana 7870:00:00.0 eth1: joined to eth0 Nov 4 23:52:54.876614 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Nov 4 23:52:54.876515 systemd-networkd[1476]: eth1: Interface name change detected, renamed to enP30832s1. Nov 4 23:52:54.976506 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 4 23:52:54.979516 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 4 23:52:54.981194 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fadad3 eth0: Data path switched to VF: enP30832s1 Nov 4 23:52:54.981092 systemd-networkd[1476]: enP30832s1: Link UP Nov 4 23:52:54.982471 systemd-networkd[1476]: enP30832s1: Gained carrier Nov 4 23:52:55.085720 systemd-networkd[1476]: eth0: Gained IPv6LL Nov 4 23:52:55.397258 disk-uuid[1653]: Warning: The kernel is still using the old partition table. Nov 4 23:52:55.397258 disk-uuid[1653]: The new table will be used at the next reboot or after you Nov 4 23:52:55.397258 disk-uuid[1653]: run partprobe(8) or kpartx(8) Nov 4 23:52:55.397258 disk-uuid[1653]: The operation has completed successfully. Nov 4 23:52:55.409331 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 4 23:52:55.409438 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 4 23:52:55.413110 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Nov 4 23:52:55.481500 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1699) Nov 4 23:52:55.485272 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:52:55.485305 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:52:55.544750 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 4 23:52:55.544806 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 4 23:52:55.546156 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 4 23:52:55.552561 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:52:55.553678 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 4 23:52:55.556892 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 4 23:52:56.430024 ignition[1718]: Ignition 2.22.0 Nov 4 23:52:56.430038 ignition[1718]: Stage: fetch-offline Nov 4 23:52:56.432834 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 23:52:56.430270 ignition[1718]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:52:56.438351 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 4 23:52:56.430280 ignition[1718]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 4 23:52:56.430378 ignition[1718]: parsed url from cmdline: "" Nov 4 23:52:56.430381 ignition[1718]: no config URL provided Nov 4 23:52:56.430386 ignition[1718]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 23:52:56.430396 ignition[1718]: no config at "/usr/lib/ignition/user.ign" Nov 4 23:52:56.430400 ignition[1718]: failed to fetch config: resource requires networking Nov 4 23:52:56.431523 ignition[1718]: Ignition finished successfully Nov 4 23:52:56.477431 ignition[1724]: Ignition 2.22.0 Nov 4 23:52:56.477442 ignition[1724]: Stage: fetch Nov 4 23:52:56.477668 ignition[1724]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:52:56.477676 ignition[1724]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 4 23:52:56.477753 ignition[1724]: parsed url from cmdline: "" Nov 4 23:52:56.477757 ignition[1724]: no config URL provided Nov 4 23:52:56.477762 ignition[1724]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 23:52:56.477768 ignition[1724]: no config at "/usr/lib/ignition/user.ign" Nov 4 23:52:56.477788 ignition[1724]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 4 23:52:56.568963 ignition[1724]: GET result: OK Nov 4 23:52:56.569041 ignition[1724]: config has been read from IMDS userdata Nov 4 23:52:56.569078 ignition[1724]: parsing config with SHA512: 602732172948899826ee373befae239a379468a204e238fc4ade02eec0ec3d13006e108bd51392ded7b3977b411c9d8548aab4fc483cd363504ba9ecf27a2c2d Nov 4 23:52:56.575853 unknown[1724]: fetched base config from "system" Nov 4 23:52:56.575864 unknown[1724]: fetched base config from "system" Nov 4 23:52:56.576204 ignition[1724]: fetch: fetch complete Nov 4 23:52:56.575869 unknown[1724]: fetched user config from "azure" Nov 4 23:52:56.576209 ignition[1724]: fetch: fetch passed Nov 4 23:52:56.578624 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 4 23:52:56.576249 ignition[1724]: Ignition finished successfully Nov 4 23:52:56.581659 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 4 23:52:56.608929 ignition[1730]: Ignition 2.22.0 Nov 4 23:52:56.608941 ignition[1730]: Stage: kargs Nov 4 23:52:56.611608 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 4 23:52:56.609167 ignition[1730]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:52:56.617061 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 4 23:52:56.609175 ignition[1730]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 4 23:52:56.609954 ignition[1730]: kargs: kargs passed Nov 4 23:52:56.609986 ignition[1730]: Ignition finished successfully Nov 4 23:52:56.642731 ignition[1737]: Ignition 2.22.0 Nov 4 23:52:56.642741 ignition[1737]: Stage: disks Nov 4 23:52:56.645375 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 4 23:52:56.642982 ignition[1737]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:52:56.648539 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 4 23:52:56.643758 ignition[1737]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 4 23:52:56.653289 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 4 23:52:56.644371 ignition[1737]: disks: disks passed Nov 4 23:52:56.659226 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 23:52:56.644398 ignition[1737]: Ignition finished successfully Nov 4 23:52:56.664881 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 23:52:56.667535 systemd[1]: Reached target basic.target - Basic System. Nov 4 23:52:56.673602 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 4 23:52:56.819260 systemd-fsck[1745]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Nov 4 23:52:56.824000 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 4 23:52:56.830820 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 4 23:52:58.580697 kernel: EXT4-fs (nvme0n1p9): mounted filesystem cfb29ed0-6faf-41a8-b421-3abc514e4975 r/w with ordered data mode. Quota mode: none. Nov 4 23:52:58.581290 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 4 23:52:58.583533 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 4 23:52:58.629499 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 23:52:58.646967 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 4 23:52:58.657621 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 4 23:52:58.663773 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1754) Nov 4 23:52:58.663804 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:52:58.664715 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 4 23:52:58.664753 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 23:52:58.667496 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:52:58.668127 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Nov 4 23:52:58.679591 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 4 23:52:58.679616 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 4 23:52:58.680555 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 4 23:52:58.683965 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 4 23:52:58.688614 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 4 23:52:59.226459 coreos-metadata[1756]: Nov 04 23:52:59.226 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 4 23:52:59.230591 coreos-metadata[1756]: Nov 04 23:52:59.230 INFO Fetch successful Nov 4 23:52:59.230591 coreos-metadata[1756]: Nov 04 23:52:59.230 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 4 23:52:59.238843 coreos-metadata[1756]: Nov 04 23:52:59.238 INFO Fetch successful Nov 4 23:52:59.254296 coreos-metadata[1756]: Nov 04 23:52:59.254 INFO wrote hostname ci-4487.0.0-n-a41d436e4a to /sysroot/etc/hostname Nov 4 23:52:59.257332 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 4 23:52:59.533473 initrd-setup-root[1785]: cut: /sysroot/etc/passwd: No such file or directory Nov 4 23:52:59.583010 initrd-setup-root[1792]: cut: /sysroot/etc/group: No such file or directory Nov 4 23:52:59.616785 initrd-setup-root[1799]: cut: /sysroot/etc/shadow: No such file or directory Nov 4 23:52:59.621285 initrd-setup-root[1806]: cut: /sysroot/etc/gshadow: No such file or directory Nov 4 23:53:00.856240 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 4 23:53:00.862588 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 4 23:53:00.867391 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 4 23:53:00.903707 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 4 23:53:00.909519 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:53:00.921678 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 4 23:53:00.941114 ignition[1875]: INFO : Ignition 2.22.0 Nov 4 23:53:00.941114 ignition[1875]: INFO : Stage: mount Nov 4 23:53:00.941114 ignition[1875]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:53:00.941114 ignition[1875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 4 23:53:00.941114 ignition[1875]: INFO : mount: mount passed Nov 4 23:53:00.941114 ignition[1875]: INFO : Ignition finished successfully Nov 4 23:53:00.942519 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 4 23:53:00.946327 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 4 23:53:00.967901 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 23:53:00.992512 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1886) Nov 4 23:53:00.994792 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:53:00.994907 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:53:01.000828 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 4 23:53:01.000861 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 4 23:53:01.002428 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 4 23:53:01.004355 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 4 23:53:01.033232 ignition[1903]: INFO : Ignition 2.22.0 Nov 4 23:53:01.033232 ignition[1903]: INFO : Stage: files Nov 4 23:53:01.035446 ignition[1903]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:53:01.035446 ignition[1903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 4 23:53:01.035446 ignition[1903]: DEBUG : files: compiled without relabeling support, skipping Nov 4 23:53:01.048392 ignition[1903]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 4 23:53:01.048392 ignition[1903]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 4 23:53:01.123781 ignition[1903]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 4 23:53:01.127550 ignition[1903]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 4 23:53:01.127550 ignition[1903]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 4 23:53:01.124109 unknown[1903]: wrote ssh authorized keys file for user: core Nov 4 23:53:01.167163 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 4 23:53:01.172561 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 4 23:53:01.223119 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 4 23:53:01.265102 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 4 23:53:01.267567 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 4 23:53:01.267567 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 4 23:53:01.600563 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 4 23:53:01.868040 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 4 23:53:01.868040 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 4 23:53:01.875536 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 4 23:53:01.875536 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 4 23:53:01.875536 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 4 23:53:01.875536 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 23:53:01.875536 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 23:53:01.875536 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 23:53:01.875536 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 23:53:01.917574 
ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 23:53:01.920217 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 23:53:01.924529 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 4 23:53:01.928312 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 4 23:53:01.928312 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 4 23:53:01.928312 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 4 23:53:02.182818 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 4 23:53:02.732524 ignition[1903]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 4 23:53:02.732524 ignition[1903]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 4 23:53:02.760711 ignition[1903]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 23:53:02.768028 ignition[1903]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 23:53:02.768028 ignition[1903]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 4 23:53:02.775185 ignition[1903]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Nov 4 23:53:02.775185 ignition[1903]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Nov 4 23:53:02.775185 ignition[1903]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 4 23:53:02.775185 ignition[1903]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 4 23:53:02.775185 ignition[1903]: INFO : files: files passed Nov 4 23:53:02.775185 ignition[1903]: INFO : Ignition finished successfully Nov 4 23:53:02.773054 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 4 23:53:02.776207 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 4 23:53:02.792791 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 4 23:53:02.796945 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 4 23:53:02.797023 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
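The files stage above streams release artifacts into the new root (op(3), op(4), op(b)) and points /etc/extensions/kubernetes.raw at the downloaded sysext image (op(a)), which systemd-sysext merges later in this log. A rough Python sketch of those two operations against a scratch directory; the scratch path and the omitted checksum/mode handling are assumptions, not Ignition's actual code path.

    # Illustrative "fetch file" + "write link" steps, not Ignition itself.
    import os
    import shutil
    import urllib.request
    from pathlib import Path

    root = Path("/tmp/sysroot-example")        # stand-in for /sysroot

    # op(3)-style fetch: stream a tarball straight to disk.
    helm = root / "opt/helm-v3.17.3-linux-amd64.tar.gz"
    helm.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen("https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz",
                                timeout=60) as resp, open(helm, "wb") as out:
        shutil.copyfileobj(resp, out)

    # op(a)-style link: /etc/extensions/kubernetes.raw -> the sysext image path.
    link = root / "etc/extensions/kubernetes.raw"
    link.parent.mkdir(parents=True, exist_ok=True)
    if link.is_symlink() or link.exists():
        link.unlink()
    os.symlink("/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw", link)
    print(helm.stat().st_size, "bytes;", link, "->", os.readlink(link))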
Nov 4 23:53:02.821847 initrd-setup-root-after-ignition[1935]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:53:02.821847 initrd-setup-root-after-ignition[1935]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:53:02.833425 initrd-setup-root-after-ignition[1939]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:53:02.824156 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 23:53:02.841714 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 4 23:53:02.843272 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 4 23:53:02.885281 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 4 23:53:02.885376 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 4 23:53:02.889769 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 4 23:53:02.893166 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 4 23:53:02.898181 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 4 23:53:02.898896 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 4 23:53:02.926862 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 23:53:02.930157 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 4 23:53:02.945391 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 23:53:02.946186 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 4 23:53:02.949199 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 23:53:02.957085 systemd[1]: Stopped target timers.target - Timer Units. Nov 4 23:53:02.961444 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 4 23:53:02.961622 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 23:53:02.968633 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 4 23:53:02.970224 systemd[1]: Stopped target basic.target - Basic System. Nov 4 23:53:02.975342 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 4 23:53:02.979646 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 23:53:02.980005 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 4 23:53:02.984765 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 4 23:53:02.989167 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 4 23:53:02.992586 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 23:53:02.997658 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 4 23:53:03.002650 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 4 23:53:03.006635 systemd[1]: Stopped target swap.target - Swaps. Nov 4 23:53:03.010609 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 4 23:53:03.010742 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 4 23:53:03.017580 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:53:03.020081 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 4 23:53:03.024611 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 4 23:53:03.026002 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:53:03.030033 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 4 23:53:03.030154 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 4 23:53:03.036639 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 4 23:53:03.036776 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 23:53:03.040276 systemd[1]: ignition-files.service: Deactivated successfully. Nov 4 23:53:03.040955 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 4 23:53:03.046397 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 4 23:53:03.046521 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 4 23:53:03.053670 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 4 23:53:03.060871 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 4 23:53:03.069758 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 4 23:53:03.071521 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:53:03.076147 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 4 23:53:03.076302 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:53:03.081455 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 4 23:53:03.081666 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 23:53:03.094294 ignition[1959]: INFO : Ignition 2.22.0 Nov 4 23:53:03.094294 ignition[1959]: INFO : Stage: umount Nov 4 23:53:03.094294 ignition[1959]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:53:03.094294 ignition[1959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 4 23:53:03.094294 ignition[1959]: INFO : umount: umount passed Nov 4 23:53:03.094294 ignition[1959]: INFO : Ignition finished successfully Nov 4 23:53:03.098757 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 4 23:53:03.098837 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 4 23:53:03.105066 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 4 23:53:03.105152 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 4 23:53:03.113626 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 4 23:53:03.113760 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 4 23:53:03.124398 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 4 23:53:03.124453 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 4 23:53:03.130234 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 4 23:53:03.130284 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 4 23:53:03.133107 systemd[1]: Stopped target network.target - Network. Nov 4 23:53:03.137527 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 4 23:53:03.137577 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 23:53:03.141579 systemd[1]: Stopped target paths.target - Path Units. Nov 4 23:53:03.144524 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Nov 4 23:53:03.146051 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:53:03.149660 systemd[1]: Stopped target slices.target - Slice Units. Nov 4 23:53:03.153127 systemd[1]: Stopped target sockets.target - Socket Units. Nov 4 23:53:03.159783 systemd[1]: iscsid.socket: Deactivated successfully. Nov 4 23:53:03.159823 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 23:53:03.164557 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 4 23:53:03.164587 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 23:53:03.165353 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 4 23:53:03.165396 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 4 23:53:03.165703 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 4 23:53:03.165736 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 4 23:53:03.166449 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 4 23:53:03.167040 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 4 23:53:03.167983 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 4 23:53:03.174548 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 4 23:53:03.174642 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 4 23:53:03.181185 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 4 23:53:03.182551 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 4 23:53:03.185846 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 4 23:53:03.185984 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 4 23:53:03.196187 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 4 23:53:03.214560 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 4 23:53:03.214602 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:53:03.219568 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 4 23:53:03.219619 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 4 23:53:03.224563 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 4 23:53:03.227564 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 4 23:53:03.227630 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 23:53:03.231857 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 23:53:03.232260 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:53:03.233916 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 4 23:53:03.234878 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 4 23:53:03.246566 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:53:03.261349 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 4 23:53:03.261506 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:53:03.268604 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 4 23:53:03.268643 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 4 23:53:03.272404 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 4 23:53:03.272440 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 4 23:53:03.277542 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 4 23:53:03.277580 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 4 23:53:03.284595 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 4 23:53:03.284654 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 4 23:53:03.290456 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 4 23:53:03.290543 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 23:53:03.299190 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 4 23:53:03.302658 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 4 23:53:03.303580 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:53:03.307577 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 4 23:53:03.307629 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:53:03.319725 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 4 23:53:03.319764 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 23:53:03.332575 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 4 23:53:03.332629 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:53:03.337567 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:53:03.337613 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:53:03.340745 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 4 23:53:03.340848 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 4 23:53:03.360566 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fadad3 eth0: Data path switched from VF: enP30832s1 Nov 4 23:53:03.360779 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 4 23:53:03.361892 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 4 23:53:03.362254 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 4 23:53:03.366846 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 4 23:53:03.371634 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 4 23:53:03.398457 systemd[1]: Switching root. Nov 4 23:53:03.472332 systemd-journald[1087]: Journal stopped Nov 4 23:53:12.095455 systemd-journald[1087]: Received SIGTERM from PID 1 (systemd). 
Nov 4 23:53:12.095499 kernel: SELinux: policy capability network_peer_controls=1 Nov 4 23:53:12.095517 kernel: SELinux: policy capability open_perms=1 Nov 4 23:53:12.095528 kernel: SELinux: policy capability extended_socket_class=1 Nov 4 23:53:12.095537 kernel: SELinux: policy capability always_check_network=0 Nov 4 23:53:12.095547 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 4 23:53:12.095559 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 4 23:53:12.095570 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 4 23:53:12.095580 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 4 23:53:12.095589 kernel: SELinux: policy capability userspace_initial_context=0 Nov 4 23:53:12.095600 kernel: audit: type=1403 audit(1762300385.258:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 4 23:53:12.095612 systemd[1]: Successfully loaded SELinux policy in 249.926ms. Nov 4 23:53:12.095624 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.399ms. Nov 4 23:53:12.095638 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 23:53:12.095650 systemd[1]: Detected virtualization microsoft. Nov 4 23:53:12.095660 systemd[1]: Detected architecture x86-64. Nov 4 23:53:12.095671 systemd[1]: Detected first boot. Nov 4 23:53:12.095684 systemd[1]: Hostname set to <ci-4487.0.0-n-a41d436e4a>. Nov 4 23:53:12.095695 systemd[1]: Initializing machine ID from random generator. Nov 4 23:53:12.095706 zram_generator::config[2003]: No configuration found. Nov 4 23:53:12.095718 kernel: Guest personality initialized and is inactive Nov 4 23:53:12.095727 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Nov 4 23:53:12.095737 kernel: Initialized host personality Nov 4 23:53:12.095749 kernel: NET: Registered PF_VSOCK protocol family Nov 4 23:53:12.095760 systemd[1]: Populated /etc with preset unit settings. Nov 4 23:53:12.095771 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 4 23:53:12.095781 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 4 23:53:12.095792 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 4 23:53:12.095804 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 4 23:53:12.095816 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 4 23:53:12.095828 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 4 23:53:12.095839 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 4 23:53:12.095851 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 4 23:53:12.095862 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 4 23:53:12.095873 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 4 23:53:12.095885 systemd[1]: Created slice user.slice - User and Session Slice. Nov 4 23:53:12.095896 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:53:12.095907 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
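The policy capabilities printed above come from the freshly loaded SELinux policy and can be read back from selinuxfs after boot. A small sketch, assuming selinuxfs is mounted at the usual /sys/fs/selinux.

    # Read current enforcement mode and policy capabilities from selinuxfs.
    from pathlib import Path

    sel = Path("/sys/fs/selinux")
    if not sel.is_dir():
        raise SystemExit("selinuxfs not mounted; SELinux disabled or unavailable")

    print("enforcing:", (sel / "enforce").read_text().strip())   # 0 permissive, 1 enforcing
    for cap in sorted((sel / "policy_capabilities").iterdir()):
        print(f"policy capability {cap.name}={cap.read_text().strip()}")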
Nov 4 23:53:12.095917 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 4 23:53:12.095928 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 4 23:53:12.095943 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 4 23:53:12.095956 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 23:53:12.095968 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 4 23:53:12.095979 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 23:53:12.095990 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:53:12.096001 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 4 23:53:12.096012 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 4 23:53:12.096025 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 4 23:53:12.096036 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 4 23:53:12.096048 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 23:53:12.096058 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 23:53:12.096069 systemd[1]: Reached target slices.target - Slice Units. Nov 4 23:53:12.096080 systemd[1]: Reached target swap.target - Swaps. Nov 4 23:53:12.096091 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 4 23:53:12.096105 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 4 23:53:12.096116 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 4 23:53:12.096128 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:53:12.096138 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 23:53:12.096150 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:53:12.096161 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 4 23:53:12.096172 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 4 23:53:12.096184 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 4 23:53:12.096195 systemd[1]: Mounting media.mount - External Media Directory... Nov 4 23:53:12.096206 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:53:12.096219 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 4 23:53:12.096230 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 4 23:53:12.096241 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 4 23:53:12.096253 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 4 23:53:12.096264 systemd[1]: Reached target machines.target - Containers. Nov 4 23:53:12.096275 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 4 23:53:12.096287 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:53:12.096300 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Nov 4 23:53:12.096311 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 4 23:53:12.096322 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:53:12.096333 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 23:53:12.096344 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:53:12.096355 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 4 23:53:12.096366 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 23:53:12.096380 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 4 23:53:12.096391 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 4 23:53:12.096402 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 4 23:53:12.096412 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 4 23:53:12.096423 systemd[1]: Stopped systemd-fsck-usr.service. Nov 4 23:53:12.096435 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:53:12.096448 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 23:53:12.096459 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 23:53:12.096471 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 23:53:12.096493 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 4 23:53:12.096504 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 4 23:53:12.096515 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 23:53:12.096527 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:53:12.096540 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 4 23:53:12.096551 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 4 23:53:12.096561 systemd[1]: Mounted media.mount - External Media Directory. Nov 4 23:53:12.096573 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 4 23:53:12.096584 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 4 23:53:12.096595 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 4 23:53:12.096606 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:53:12.096619 kernel: fuse: init (API version 7.41) Nov 4 23:53:12.096629 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 4 23:53:12.096641 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 4 23:53:12.096651 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:53:12.096662 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:53:12.096674 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 4 23:53:12.096701 systemd-journald[2086]: Collecting audit messages is disabled. Nov 4 23:53:12.096727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Nov 4 23:53:12.096738 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:53:12.096750 systemd-journald[2086]: Journal started Nov 4 23:53:12.096775 systemd-journald[2086]: Runtime Journal (/run/log/journal/f0d6018cc5dc44c5a8202c3e32559461) is 8M, max 158.6M, 150.6M free. Nov 4 23:53:11.588355 systemd[1]: Queued start job for default target multi-user.target. Nov 4 23:53:11.599109 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 4 23:53:11.599515 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 4 23:53:12.102498 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 23:53:12.103854 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 4 23:53:12.104006 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 4 23:53:12.105870 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:53:12.106014 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:53:12.107958 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:53:12.110657 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 4 23:53:12.118454 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 23:53:12.123809 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 23:53:12.125929 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 4 23:53:12.132576 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 4 23:53:12.136647 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 4 23:53:12.139089 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 4 23:53:12.139201 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 23:53:12.146220 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 4 23:53:12.149140 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:53:12.150466 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 4 23:53:12.157610 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 4 23:53:12.160738 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:53:12.162629 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 4 23:53:12.165638 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 23:53:12.168619 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:53:12.173652 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 4 23:53:12.179663 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 23:53:12.188201 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 4 23:53:12.193729 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
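The runtime journal above lives on tmpfs under /run/log/journal until the flush service moves it to /var/log/journal. Once the system is up, the standard journalctl options below show the resulting disk usage and the journald messages quoted in this log; a sketch, assuming journalctl is on PATH and the caller is allowed to read the journal.

    # Inspect the persistent journal created when the runtime journal was flushed.
    import subprocess

    subprocess.run(["journalctl", "--disk-usage"], check=True)
    subprocess.run(["journalctl", "-b", "-u", "systemd-journald.service", "--no-pager"],
                   check=True)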
Nov 4 23:53:12.198430 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 4 23:53:12.214873 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:53:12.231685 kernel: ACPI: bus type drm_connector registered Nov 4 23:53:12.230916 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 23:53:12.231101 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 23:53:12.236277 systemd-journald[2086]: Time spent on flushing to /var/log/journal/f0d6018cc5dc44c5a8202c3e32559461 is 10.726ms for 975 entries. Nov 4 23:53:12.236277 systemd-journald[2086]: System Journal (/var/log/journal/f0d6018cc5dc44c5a8202c3e32559461) is 8M, max 2.2G, 2.2G free. Nov 4 23:53:12.352066 systemd-journald[2086]: Received client request to flush runtime journal. Nov 4 23:53:12.352125 kernel: loop1: detected capacity change from 0 to 128048 Nov 4 23:53:12.272215 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 4 23:53:12.274465 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 4 23:53:12.277932 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 4 23:53:12.338312 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:53:12.353724 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 4 23:53:12.388817 systemd-tmpfiles[2143]: ACLs are not supported, ignoring. Nov 4 23:53:12.388833 systemd-tmpfiles[2143]: ACLs are not supported, ignoring. Nov 4 23:53:12.392555 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 23:53:12.396598 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 4 23:53:12.503638 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 4 23:53:12.600672 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 4 23:53:12.800506 kernel: loop2: detected capacity change from 0 to 27752 Nov 4 23:53:13.124662 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 4 23:53:13.131118 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 23:53:13.136617 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 23:53:13.149959 systemd-tmpfiles[2165]: ACLs are not supported, ignoring. Nov 4 23:53:13.149976 systemd-tmpfiles[2165]: ACLs are not supported, ignoring. Nov 4 23:53:13.152658 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:53:13.207721 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 4 23:53:13.211811 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:53:13.229945 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 4 23:53:13.246943 systemd-udevd[2169]: Using default interface naming scheme 'v257'. Nov 4 23:53:13.278769 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 4 23:53:13.306497 kernel: loop3: detected capacity change from 0 to 229808 Nov 4 23:53:13.374673 kernel: loop4: detected capacity change from 0 to 110984 Nov 4 23:53:13.375249 systemd-resolved[2164]: Positive Trust Anchors: Nov 4 23:53:13.375267 systemd-resolved[2164]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 23:53:13.375272 systemd-resolved[2164]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 23:53:13.375306 systemd-resolved[2164]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 23:53:13.654112 systemd-resolved[2164]: Using system hostname 'ci-4487.0.0-n-a41d436e4a'. Nov 4 23:53:13.655179 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 23:53:13.658628 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 23:53:13.940504 kernel: loop5: detected capacity change from 0 to 128048 Nov 4 23:53:13.942250 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:53:13.948811 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 23:53:13.971507 kernel: loop6: detected capacity change from 0 to 27752 Nov 4 23:53:14.001867 kernel: loop7: detected capacity change from 0 to 229808 Nov 4 23:53:14.004200 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 4 23:53:14.024507 kernel: loop1: detected capacity change from 0 to 110984 Nov 4 23:53:14.045966 (sd-merge)[2180]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Nov 4 23:53:14.052934 (sd-merge)[2180]: Merged extensions into '/usr'. Nov 4 23:53:14.059512 systemd[1]: Reload requested from client PID 2142 ('systemd-sysext') (unit systemd-sysext.service)... Nov 4 23:53:14.059620 systemd[1]: Reloading... Nov 4 23:53:14.086504 kernel: mousedev: PS/2 mouse device common for all mice Nov 4 23:53:14.113519 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#315 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 4 23:53:14.125412 kernel: hv_vmbus: registering driver hyperv_fb Nov 4 23:53:14.136538 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 4 23:53:14.136587 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 4 23:53:14.140508 kernel: Console: switching to colour dummy device 80x25 Nov 4 23:53:14.148926 kernel: Console: switching to colour frame buffer device 128x48 Nov 4 23:53:14.151687 kernel: hv_vmbus: registering driver hv_balloon Nov 4 23:53:14.151748 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 4 23:53:14.158016 systemd-networkd[2190]: lo: Link UP Nov 4 23:53:14.158020 systemd-networkd[2190]: lo: Gained carrier Nov 4 23:53:14.163387 systemd-networkd[2190]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:53:14.164962 systemd-networkd[2190]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 23:53:14.167510 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 4 23:53:14.170509 zram_generator::config[2260]: No configuration found. 
Nov 4 23:53:14.172043 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 4 23:53:14.172293 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fadad3 eth0: Data path switched to VF: enP30832s1 Nov 4 23:53:14.174606 systemd-networkd[2190]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:53:14.174678 systemd-networkd[2190]: enP30832s1: Link UP Nov 4 23:53:14.174807 systemd-networkd[2190]: eth0: Link UP Nov 4 23:53:14.174816 systemd-networkd[2190]: eth0: Gained carrier Nov 4 23:53:14.174829 systemd-networkd[2190]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:53:14.180668 systemd-networkd[2190]: enP30832s1: Gained carrier Nov 4 23:53:14.193572 systemd-networkd[2190]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 4 23:53:14.546556 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Nov 4 23:53:14.572255 systemd[1]: Reloading finished in 512 ms. Nov 4 23:53:14.588715 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 23:53:14.592782 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 4 23:53:14.629165 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 4 23:53:14.634322 systemd[1]: Reached target network.target - Network. Nov 4 23:53:14.643264 systemd[1]: Starting ensure-sysext.service... Nov 4 23:53:14.647403 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 4 23:53:14.659107 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 4 23:53:14.663625 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 4 23:53:14.668647 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 23:53:14.676635 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:53:14.691249 systemd-tmpfiles[2342]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 4 23:53:14.691278 systemd-tmpfiles[2342]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 4 23:53:14.691497 systemd-tmpfiles[2342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 4 23:53:14.691752 systemd-tmpfiles[2342]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 4 23:53:14.692450 systemd-tmpfiles[2342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 4 23:53:14.692691 systemd-tmpfiles[2342]: ACLs are not supported, ignoring. Nov 4 23:53:14.692777 systemd-tmpfiles[2342]: ACLs are not supported, ignoring. Nov 4 23:53:14.693712 systemd[1]: Reload requested from client PID 2338 ('systemctl') (unit ensure-sysext.service)... Nov 4 23:53:14.693787 systemd[1]: Reloading... Nov 4 23:53:14.761517 zram_generator::config[2380]: No configuration found. Nov 4 23:53:14.786064 systemd-tmpfiles[2342]: Detected autofs mount point /boot during canonicalization of boot. 
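sd-merge above overlaid containerd-flatcar.raw, docker-flatcar.raw, kubernetes.raw and oem-azure.raw onto /usr and /opt. The sketch below merely lists the images in the default sysext search paths; the path list is an assumption based on systemd defaults, and `systemd-sysext status` is the native way to see the active merge.

    # List system extension images in the default search locations.
    from pathlib import Path

    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for base in map(Path, SEARCH_PATHS):
        if not base.is_dir():
            continue
        for entry in sorted(base.iterdir()):
            target = f" -> {entry.readlink()}" if entry.is_symlink() else ""
            print(f"{entry}{target}")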
Nov 4 23:53:14.786074 systemd-tmpfiles[2342]: Skipping /boot Nov 4 23:53:14.792915 systemd-tmpfiles[2342]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 23:53:14.792926 systemd-tmpfiles[2342]: Skipping /boot Nov 4 23:53:14.948839 systemd[1]: Reloading finished in 254 ms. Nov 4 23:53:14.980134 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 4 23:53:14.981004 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 4 23:53:14.983016 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:53:14.992226 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:53:14.994706 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 4 23:53:15.004441 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 4 23:53:15.007707 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 4 23:53:15.011205 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 4 23:53:15.014455 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:53:15.014853 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:53:15.020112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:53:15.025092 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:53:15.029802 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 23:53:15.032810 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:53:15.032999 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:53:15.033101 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:53:15.034829 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:53:15.035410 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:53:15.036329 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:53:15.036475 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:53:15.038142 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:53:15.042651 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:53:15.042789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:53:15.046179 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:53:15.049769 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Nov 4 23:53:15.053611 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:53:15.053742 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:53:15.053850 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:53:15.055382 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:53:15.056589 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:53:15.061253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:53:15.061429 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:53:15.072935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:53:15.073079 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:53:15.075239 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 4 23:53:15.079021 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:53:15.079343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:53:15.080113 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:53:15.082601 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 23:53:15.085680 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 23:53:15.086066 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:53:15.086102 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:53:15.086140 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:53:15.086179 systemd[1]: Reached target time-set.target - System Time Set. Nov 4 23:53:15.086235 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:53:15.087704 systemd[1]: Finished ensure-sysext.service. Nov 4 23:53:15.097357 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:53:15.097549 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:53:15.098287 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 23:53:15.098425 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 23:53:15.102903 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:53:15.103059 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:53:15.103801 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 23:53:15.341823 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Nov 4 23:53:15.438634 systemd-networkd[2190]: eth0: Gained IPv6LL Nov 4 23:53:15.440596 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 4 23:53:15.441337 systemd[1]: Reached target network-online.target - Network is Online. Nov 4 23:53:15.507208 augenrules[2487]: No rules Nov 4 23:53:15.508157 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:53:15.508589 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:53:16.059601 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:53:17.786052 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 4 23:53:17.788315 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 4 23:53:23.652669 ldconfig[2445]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 4 23:53:23.662231 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 4 23:53:23.667094 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 4 23:53:23.696115 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 4 23:53:23.697807 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 23:53:23.701698 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 4 23:53:23.704573 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 4 23:53:23.706117 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 4 23:53:23.708701 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 4 23:53:23.711657 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 4 23:53:23.714547 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 4 23:53:23.718550 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 4 23:53:23.718584 systemd[1]: Reached target paths.target - Path Units. Nov 4 23:53:23.719922 systemd[1]: Reached target timers.target - Timer Units. Nov 4 23:53:23.736601 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 4 23:53:23.739436 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 4 23:53:23.757835 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 4 23:53:23.759509 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 4 23:53:23.762548 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 4 23:53:23.773956 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 4 23:53:23.792248 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 4 23:53:23.839345 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 4 23:53:23.843306 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 23:53:23.845531 systemd[1]: Reached target basic.target - Basic System. Nov 4 23:53:23.846702 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Nov 4 23:53:23.846731 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 4 23:53:23.860242 systemd[1]: Starting chronyd.service - NTP client/server... Nov 4 23:53:23.864575 systemd[1]: Starting containerd.service - containerd container runtime... Nov 4 23:53:23.868398 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 4 23:53:23.873667 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 4 23:53:23.880117 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 4 23:53:23.887635 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 4 23:53:23.894271 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 4 23:53:23.896402 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 4 23:53:23.899802 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 4 23:53:23.903582 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Nov 4 23:53:23.904699 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 4 23:53:23.905193 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 4 23:53:23.908582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:53:23.913663 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 4 23:53:23.917665 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 4 23:53:23.923780 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 4 23:53:23.926015 jq[2507]: false Nov 4 23:53:23.929685 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 4 23:53:23.937726 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 4 23:53:23.942314 KVP[2513]: KVP starting; pid is:2513 Nov 4 23:53:23.948498 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 4 23:53:23.952614 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 4 23:53:23.953060 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 4 23:53:23.954006 systemd[1]: Starting update-engine.service - Update Engine... Nov 4 23:53:23.957235 KVP[2513]: KVP LIC Version: 3.1 Nov 4 23:53:23.958678 kernel: hv_utils: KVP IC version 4.0 Nov 4 23:53:23.961676 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 4 23:53:23.966374 extend-filesystems[2511]: Found /dev/nvme0n1p6 Nov 4 23:53:23.971638 google_oslogin_nss_cache[2512]: oslogin_cache_refresh[2512]: Refreshing passwd entry cache Nov 4 23:53:23.970058 oslogin_cache_refresh[2512]: Refreshing passwd entry cache Nov 4 23:53:23.977124 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 4 23:53:23.981268 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Nov 4 23:53:23.981710 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 4 23:53:23.984133 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 4 23:53:23.984663 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 4 23:53:23.989026 google_oslogin_nss_cache[2512]: oslogin_cache_refresh[2512]: Failure getting users, quitting Nov 4 23:53:23.989022 oslogin_cache_refresh[2512]: Failure getting users, quitting Nov 4 23:53:23.989126 google_oslogin_nss_cache[2512]: oslogin_cache_refresh[2512]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 23:53:23.989126 google_oslogin_nss_cache[2512]: oslogin_cache_refresh[2512]: Refreshing group entry cache Nov 4 23:53:23.989038 oslogin_cache_refresh[2512]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 23:53:23.989077 oslogin_cache_refresh[2512]: Refreshing group entry cache Nov 4 23:53:23.994245 chronyd[2502]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Nov 4 23:53:24.001864 jq[2525]: true Nov 4 23:53:24.008591 extend-filesystems[2511]: Found /dev/nvme0n1p9 Nov 4 23:53:24.015664 extend-filesystems[2511]: Checking size of /dev/nvme0n1p9 Nov 4 23:53:24.015366 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 4 23:53:24.019744 google_oslogin_nss_cache[2512]: oslogin_cache_refresh[2512]: Failure getting groups, quitting Nov 4 23:53:24.019744 google_oslogin_nss_cache[2512]: oslogin_cache_refresh[2512]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 23:53:24.010310 oslogin_cache_refresh[2512]: Failure getting groups, quitting Nov 4 23:53:24.016231 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 4 23:53:24.010320 oslogin_cache_refresh[2512]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 23:53:24.027137 (ntainerd)[2550]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 4 23:53:24.037047 systemd[1]: motdgen.service: Deactivated successfully. Nov 4 23:53:24.037913 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 4 23:53:24.041656 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 4 23:53:24.045854 jq[2552]: true Nov 4 23:53:24.055828 chronyd[2502]: Timezone right/UTC failed leap second check, ignoring Nov 4 23:53:24.056392 systemd[1]: Started chronyd.service - NTP client/server. Nov 4 23:53:24.055970 chronyd[2502]: Loaded seccomp filter (level 2) Nov 4 23:53:24.058528 update_engine[2524]: I20251104 23:53:24.058238 2524 main.cc:92] Flatcar Update Engine starting Nov 4 23:53:24.083631 extend-filesystems[2511]: Resized partition /dev/nvme0n1p9 Nov 4 23:53:24.106447 tar[2531]: linux-amd64/LICENSE Nov 4 23:53:24.106800 tar[2531]: linux-amd64/helm Nov 4 23:53:24.117056 extend-filesystems[2580]: resize2fs 1.47.3 (8-Jul-2025) Nov 4 23:53:24.120269 systemd-logind[2523]: New seat seat0. Nov 4 23:53:24.153474 systemd-logind[2523]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Nov 4 23:53:24.153917 systemd[1]: Started systemd-logind.service - User Login Management. 
Nov 4 23:53:24.166974 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Nov 4 23:53:24.187731 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Nov 4 23:53:24.220589 extend-filesystems[2580]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 4 23:53:24.220589 extend-filesystems[2580]: old_desc_blocks = 4, new_desc_blocks = 4 Nov 4 23:53:24.220589 extend-filesystems[2580]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Nov 4 23:53:24.220373 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 4 23:53:24.244648 bash[2589]: Updated "/home/core/.ssh/authorized_keys" Nov 4 23:53:24.244734 extend-filesystems[2511]: Resized filesystem in /dev/nvme0n1p9 Nov 4 23:53:24.224871 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 4 23:53:24.225076 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 4 23:53:24.242430 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 4 23:53:24.356020 dbus-daemon[2505]: [system] SELinux support is enabled Nov 4 23:53:24.356220 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 4 23:53:24.366992 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 4 23:53:24.367029 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 4 23:53:24.372556 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 4 23:53:24.372582 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 4 23:53:24.384623 dbus-daemon[2505]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 4 23:53:24.388232 systemd[1]: Started update-engine.service - Update Engine. Nov 4 23:53:24.391257 update_engine[2524]: I20251104 23:53:24.391206 2524 update_check_scheduler.cc:74] Next update check in 11m45s Nov 4 23:53:24.402709 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 4 23:53:24.453539 sshd_keygen[2568]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 23:53:24.477743 coreos-metadata[2504]: Nov 04 23:53:24.477 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 4 23:53:24.480971 coreos-metadata[2504]: Nov 04 23:53:24.480 INFO Fetch successful Nov 4 23:53:24.481815 coreos-metadata[2504]: Nov 04 23:53:24.481 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 4 23:53:24.483083 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
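The extend-filesystems unit above performs an online grow of the root filesystem: resize2fs 1.47.3 takes /dev/nvme0n1p9 from 6359552 to 6376955 4k blocks while it is mounted on /. A minimal sketch of the same step done by hand, assuming the underlying partition has already been enlarged:

# Grow the mounted ext4 filesystem to fill its partition; ext4 supports online
# resizing, so no unmount or reboot is needed.
resize2fs /dev/nvme0n1p9
# Verify the new size as seen through the mount point.
df -h /

This in-place resize is why the kernel logs the EXT4-fs resizing messages immediately before extend-filesystems.service finishes.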
Nov 4 23:53:24.485259 coreos-metadata[2504]: Nov 04 23:53:24.485 INFO Fetch successful Nov 4 23:53:24.485986 coreos-metadata[2504]: Nov 04 23:53:24.485 INFO Fetching http://168.63.129.16/machine/caba2026-1644-4953-8661-7e1fd396cecb/b05a38a5%2Dd880%2D4bc2%2D98d4%2D265fcf5a99ad.%5Fci%2D4487.0.0%2Dn%2Da41d436e4a?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 4 23:53:24.487893 coreos-metadata[2504]: Nov 04 23:53:24.487 INFO Fetch successful Nov 4 23:53:24.488550 coreos-metadata[2504]: Nov 04 23:53:24.488 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 4 23:53:24.490504 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 23:53:24.495864 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 4 23:53:24.505348 coreos-metadata[2504]: Nov 04 23:53:24.505 INFO Fetch successful Nov 4 23:53:24.534836 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 23:53:24.536229 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 4 23:53:24.545110 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 4 23:53:24.558954 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 4 23:53:24.562962 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 4 23:53:24.567612 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 4 23:53:24.586116 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 4 23:53:24.590196 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 23:53:24.597796 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 4 23:53:24.600039 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 23:53:24.744409 locksmithd[2610]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 4 23:53:24.814050 tar[2531]: linux-amd64/README.md Nov 4 23:53:24.833781 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
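coreos-metadata talks to two Azure endpoints in the fetches above: the WireServer at 168.63.129.16 for the goal state and shared config, and the Instance Metadata Service at 169.254.169.254 for the VM size. A rough sketch of querying the same URLs by hand, assuming curl is present in the image; IMDS requires the Metadata header, while the plain WireServer version probe does not:

# WireServer version probe, the same URL the metadata agent fetches first.
curl -s 'http://168.63.129.16/?comp=versions'
# IMDS query for the VM size; the 'Metadata: true' header is mandatory for IMDS.
curl -s -H 'Metadata: true' \
  'http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text'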
Nov 4 23:53:25.032313 containerd[2550]: time="2025-11-04T23:53:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 4 23:53:25.033839 containerd[2550]: time="2025-11-04T23:53:25.033805911Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 4 23:53:25.041858 containerd[2550]: time="2025-11-04T23:53:25.041823477Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.385µs" Nov 4 23:53:25.041858 containerd[2550]: time="2025-11-04T23:53:25.041848784Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 4 23:53:25.041951 containerd[2550]: time="2025-11-04T23:53:25.041866866Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 4 23:53:25.042024 containerd[2550]: time="2025-11-04T23:53:25.042005156Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 23:53:25.042050 containerd[2550]: time="2025-11-04T23:53:25.042024324Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 23:53:25.042072 containerd[2550]: time="2025-11-04T23:53:25.042046488Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 23:53:25.042106 containerd[2550]: time="2025-11-04T23:53:25.042092730Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 23:53:25.042106 containerd[2550]: time="2025-11-04T23:53:25.042102894Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 23:53:25.042287 containerd[2550]: time="2025-11-04T23:53:25.042268074Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 23:53:25.042287 containerd[2550]: time="2025-11-04T23:53:25.042279659Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 23:53:25.042337 containerd[2550]: time="2025-11-04T23:53:25.042290247Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 23:53:25.042337 containerd[2550]: time="2025-11-04T23:53:25.042298725Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 23:53:25.042376 containerd[2550]: time="2025-11-04T23:53:25.042356800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 4 23:53:25.042545 containerd[2550]: time="2025-11-04T23:53:25.042526568Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 23:53:25.042577 containerd[2550]: time="2025-11-04T23:53:25.042550607Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Nov 4 23:53:25.042577 containerd[2550]: time="2025-11-04T23:53:25.042560776Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 23:53:25.042617 containerd[2550]: time="2025-11-04T23:53:25.042586809Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 23:53:25.042770 containerd[2550]: time="2025-11-04T23:53:25.042753217Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 23:53:25.042815 containerd[2550]: time="2025-11-04T23:53:25.042800476Z" level=info msg="metadata content store policy set" policy=shared Nov 4 23:53:25.058085 containerd[2550]: time="2025-11-04T23:53:25.058048402Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 4 23:53:25.058146 containerd[2550]: time="2025-11-04T23:53:25.058092969Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 4 23:53:25.058146 containerd[2550]: time="2025-11-04T23:53:25.058108948Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 4 23:53:25.058146 containerd[2550]: time="2025-11-04T23:53:25.058121790Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 4 23:53:25.058146 containerd[2550]: time="2025-11-04T23:53:25.058135835Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 4 23:53:25.058237 containerd[2550]: time="2025-11-04T23:53:25.058146291Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 4 23:53:25.058237 containerd[2550]: time="2025-11-04T23:53:25.058158182Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 4 23:53:25.058237 containerd[2550]: time="2025-11-04T23:53:25.058169456Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 4 23:53:25.058237 containerd[2550]: time="2025-11-04T23:53:25.058182007Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 4 23:53:25.058237 containerd[2550]: time="2025-11-04T23:53:25.058194226Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 4 23:53:25.058237 containerd[2550]: time="2025-11-04T23:53:25.058203470Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 4 23:53:25.058237 containerd[2550]: time="2025-11-04T23:53:25.058215854Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 4 23:53:25.059271 containerd[2550]: time="2025-11-04T23:53:25.059254280Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 4 23:53:25.059311 containerd[2550]: time="2025-11-04T23:53:25.059278993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 4 23:53:25.059311 containerd[2550]: time="2025-11-04T23:53:25.059294275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 4 23:53:25.059311 containerd[2550]: time="2025-11-04T23:53:25.059305045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Nov 4 23:53:25.059390 containerd[2550]: time="2025-11-04T23:53:25.059316089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 4 23:53:25.059390 containerd[2550]: time="2025-11-04T23:53:25.059326964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 4 23:53:25.059390 containerd[2550]: time="2025-11-04T23:53:25.059340248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 4 23:53:25.059390 containerd[2550]: time="2025-11-04T23:53:25.059350707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 4 23:53:25.059390 containerd[2550]: time="2025-11-04T23:53:25.059362863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 4 23:53:25.059390 containerd[2550]: time="2025-11-04T23:53:25.059381252Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 4 23:53:25.059546 containerd[2550]: time="2025-11-04T23:53:25.059393701Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 4 23:53:25.059546 containerd[2550]: time="2025-11-04T23:53:25.059459871Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 4 23:53:25.059546 containerd[2550]: time="2025-11-04T23:53:25.059472382Z" level=info msg="Start snapshots syncer" Nov 4 23:53:25.059546 containerd[2550]: time="2025-11-04T23:53:25.059515072Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 4 23:53:25.059787 containerd[2550]: time="2025-11-04T23:53:25.059758218Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 4 
23:53:25.059918 containerd[2550]: time="2025-11-04T23:53:25.059803781Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 4 23:53:25.059918 containerd[2550]: time="2025-11-04T23:53:25.059865333Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 23:53:25.059963 containerd[2550]: time="2025-11-04T23:53:25.059953968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 23:53:25.059983 containerd[2550]: time="2025-11-04T23:53:25.059972505Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 23:53:25.060003 containerd[2550]: time="2025-11-04T23:53:25.059983602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 23:53:25.060003 containerd[2550]: time="2025-11-04T23:53:25.059994081Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 23:53:25.060046 containerd[2550]: time="2025-11-04T23:53:25.060005302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 23:53:25.060046 containerd[2550]: time="2025-11-04T23:53:25.060015849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 23:53:25.060046 containerd[2550]: time="2025-11-04T23:53:25.060032232Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 4 23:53:25.060104 containerd[2550]: time="2025-11-04T23:53:25.060054650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 23:53:25.060104 containerd[2550]: time="2025-11-04T23:53:25.060065467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 4 23:53:25.060104 containerd[2550]: time="2025-11-04T23:53:25.060076325Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 23:53:25.060104 containerd[2550]: time="2025-11-04T23:53:25.060097677Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:53:25.060184 containerd[2550]: time="2025-11-04T23:53:25.060109729Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:53:25.060184 containerd[2550]: time="2025-11-04T23:53:25.060119082Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:53:25.060184 containerd[2550]: time="2025-11-04T23:53:25.060128859Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:53:25.060184 containerd[2550]: time="2025-11-04T23:53:25.060136817Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 23:53:25.060184 containerd[2550]: time="2025-11-04T23:53:25.060147340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 23:53:25.060184 containerd[2550]: time="2025-11-04T23:53:25.060157913Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 23:53:25.060184 
containerd[2550]: time="2025-11-04T23:53:25.060174222Z" level=info msg="runtime interface created" Nov 4 23:53:25.060184 containerd[2550]: time="2025-11-04T23:53:25.060179181Z" level=info msg="created NRI interface" Nov 4 23:53:25.060331 containerd[2550]: time="2025-11-04T23:53:25.060187214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 23:53:25.060331 containerd[2550]: time="2025-11-04T23:53:25.060198151Z" level=info msg="Connect containerd service" Nov 4 23:53:25.060331 containerd[2550]: time="2025-11-04T23:53:25.060220196Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 23:53:25.061020 containerd[2550]: time="2025-11-04T23:53:25.060985225Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 23:53:25.585630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:53:25.588890 containerd[2550]: time="2025-11-04T23:53:25.588848041Z" level=info msg="Start subscribing containerd event" Nov 4 23:53:25.589027 containerd[2550]: time="2025-11-04T23:53:25.588999363Z" level=info msg="Start recovering state" Nov 4 23:53:25.589161 containerd[2550]: time="2025-11-04T23:53:25.589135970Z" level=info msg="Start event monitor" Nov 4 23:53:25.589161 containerd[2550]: time="2025-11-04T23:53:25.589154284Z" level=info msg="Start cni network conf syncer for default" Nov 4 23:53:25.589223 containerd[2550]: time="2025-11-04T23:53:25.589161763Z" level=info msg="Start streaming server" Nov 4 23:53:25.589223 containerd[2550]: time="2025-11-04T23:53:25.589171163Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 23:53:25.589223 containerd[2550]: time="2025-11-04T23:53:25.589178466Z" level=info msg="runtime interface starting up..." Nov 4 23:53:25.589223 containerd[2550]: time="2025-11-04T23:53:25.589184496Z" level=info msg="starting plugins..." Nov 4 23:53:25.589223 containerd[2550]: time="2025-11-04T23:53:25.589197077Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 23:53:25.589328 containerd[2550]: time="2025-11-04T23:53:25.589009835Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 23:53:25.589328 containerd[2550]: time="2025-11-04T23:53:25.589309776Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 4 23:53:25.589367 containerd[2550]: time="2025-11-04T23:53:25.589355744Z" level=info msg="containerd successfully booted in 0.557441s" Nov 4 23:53:25.590872 systemd[1]: Started containerd.service - containerd container runtime. Nov 4 23:53:25.593654 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 23:53:25.597703 systemd[1]: Startup finished in 4.341s (kernel) + 13.283s (initrd) + 20.586s (userspace) = 38.211s. 
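The error above from containerd's CRI plugin means no CNI configuration exists yet in /etc/cni/net.d, so pod networking stays uninitialized until something installs one (typically a CNI add-on deployed after the node joins a cluster); the cni network conf syncer started a few lines later keeps watching that directory. Purely as an illustration of the kind of file it is waiting for, a hypothetical single-host bridge conflist; the name, bridge device and subnet below are placeholder values, not what this node will actually use:

# Hypothetical example only; a real cluster add-on normally writes this file.
mkdir -p /etc/cni/net.d
cat <<'EOF' > /etc/cni/net.d/10-example-bridge.conflist
{
  "cniVersion": "1.0.0",
  "name": "example-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF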
Nov 4 23:53:25.599826 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:53:26.268770 kubelet[2670]: E1104 23:53:26.268696 2670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:53:26.270828 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:53:26.270970 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:53:26.271426 systemd[1]: kubelet.service: Consumed 1.004s CPU time, 268.1M memory peak. Nov 4 23:53:26.465756 login[2638]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Nov 4 23:53:26.479991 login[2639]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 4 23:53:26.485162 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 4 23:53:26.486881 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 23:53:26.494521 systemd-logind[2523]: New session 1 of user core. Nov 4 23:53:26.514670 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 23:53:26.516967 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 4 23:53:26.542986 (systemd)[2684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 23:53:26.544755 systemd-logind[2523]: New session c1 of user core. Nov 4 23:53:26.836019 waagent[2635]: 2025-11-04T23:53:26.835904Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Nov 4 23:53:26.838495 waagent[2635]: 2025-11-04T23:53:26.838210Z INFO Daemon Daemon OS: flatcar 4487.0.0 Nov 4 23:53:26.840304 waagent[2635]: 2025-11-04T23:53:26.840267Z INFO Daemon Daemon Python: 3.11.13 Nov 4 23:53:26.842570 waagent[2635]: 2025-11-04T23:53:26.842530Z INFO Daemon Daemon Run daemon Nov 4 23:53:26.845545 waagent[2635]: 2025-11-04T23:53:26.844547Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4487.0.0' Nov 4 23:53:26.845831 waagent[2635]: 2025-11-04T23:53:26.844850Z INFO Daemon Daemon Using waagent for provisioning Nov 4 23:53:26.846060 waagent[2635]: 2025-11-04T23:53:26.846028Z INFO Daemon Daemon Activate resource disk Nov 4 23:53:26.852152 waagent[2635]: 2025-11-04T23:53:26.851329Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 4 23:53:26.856756 waagent[2635]: 2025-11-04T23:53:26.856717Z INFO Daemon Daemon Found device: None Nov 4 23:53:26.858751 waagent[2635]: 2025-11-04T23:53:26.858716Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 4 23:53:26.860706 waagent[2635]: 2025-11-04T23:53:26.860677Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 4 23:53:26.866002 waagent[2635]: 2025-11-04T23:53:26.865960Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 4 23:53:26.868408 waagent[2635]: 2025-11-04T23:53:26.868369Z INFO Daemon Daemon Running default provisioning handler Nov 4 23:53:26.871252 systemd[2684]: Queued start job for default target default.target. 
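The kubelet failure above (repeated around 23:53:37 and 23:53:48 below) is expected at this stage: /var/lib/kubelet/config.yaml is written when the node is initialized or joined with kubeadm, so until then the unit exits and systemd keeps scheduling restarts. For illustration only, a minimal hypothetical KubeletConfiguration of the kind that file eventually contains; the values are placeholders, not this node's real settings:

# Hypothetical placeholder; kubeadm normally generates the real file on init/join.
mkdir -p /var/lib/kubelet
cat <<'EOF' > /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd        # matches SystemdCgroup=true in the containerd runc options above
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10               # placeholder cluster DNS address
EOF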
Nov 4 23:53:26.876425 systemd[2684]: Created slice app.slice - User Application Slice. Nov 4 23:53:26.876450 systemd[2684]: Reached target paths.target - Paths. Nov 4 23:53:26.877323 waagent[2635]: 2025-11-04T23:53:26.876573Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Nov 4 23:53:26.876506 systemd[2684]: Reached target timers.target - Timers. Nov 4 23:53:26.879089 waagent[2635]: 2025-11-04T23:53:26.879040Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 4 23:53:26.879462 waagent[2635]: 2025-11-04T23:53:26.879380Z INFO Daemon Daemon cloud-init is enabled: False Nov 4 23:53:26.879801 waagent[2635]: 2025-11-04T23:53:26.879779Z INFO Daemon Daemon Copying ovf-env.xml Nov 4 23:53:26.889651 systemd[2684]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 23:53:26.903763 systemd[2684]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 23:53:26.903863 systemd[2684]: Reached target sockets.target - Sockets. Nov 4 23:53:26.903905 systemd[2684]: Reached target basic.target - Basic System. Nov 4 23:53:26.903940 systemd[2684]: Reached target default.target - Main User Target. Nov 4 23:53:26.903963 systemd[2684]: Startup finished in 354ms. Nov 4 23:53:26.904160 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 23:53:26.911602 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 4 23:53:26.985163 waagent[2635]: 2025-11-04T23:53:26.985116Z INFO Daemon Daemon Successfully mounted dvd Nov 4 23:53:27.011813 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 4 23:53:27.013625 waagent[2635]: 2025-11-04T23:53:27.013579Z INFO Daemon Daemon Detect protocol endpoint Nov 4 23:53:27.014910 waagent[2635]: 2025-11-04T23:53:27.014838Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 4 23:53:27.017333 waagent[2635]: 2025-11-04T23:53:27.015382Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Nov 4 23:53:27.017333 waagent[2635]: 2025-11-04T23:53:27.015765Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 4 23:53:27.017333 waagent[2635]: 2025-11-04T23:53:27.015920Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 4 23:53:27.017333 waagent[2635]: 2025-11-04T23:53:27.016169Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 4 23:53:27.044682 waagent[2635]: 2025-11-04T23:53:27.044650Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 4 23:53:27.046791 waagent[2635]: 2025-11-04T23:53:27.045546Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 4 23:53:27.046791 waagent[2635]: 2025-11-04T23:53:27.045624Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 4 23:53:27.156183 waagent[2635]: 2025-11-04T23:53:27.156075Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 4 23:53:27.157223 waagent[2635]: 2025-11-04T23:53:27.156933Z INFO Daemon Daemon Forcing an update of the goal state. 
Nov 4 23:53:27.164872 waagent[2635]: 2025-11-04T23:53:27.164834Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 4 23:53:27.183424 waagent[2635]: 2025-11-04T23:53:27.183384Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Nov 4 23:53:27.185638 waagent[2635]: 2025-11-04T23:53:27.184014Z INFO Daemon Nov 4 23:53:27.185638 waagent[2635]: 2025-11-04T23:53:27.184344Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ab89b623-d99a-426a-a094-bb0778f9cc22 eTag: 4275051529311147731 source: Fabric] Nov 4 23:53:27.185638 waagent[2635]: 2025-11-04T23:53:27.184631Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Nov 4 23:53:27.185638 waagent[2635]: 2025-11-04T23:53:27.184874Z INFO Daemon Nov 4 23:53:27.185638 waagent[2635]: 2025-11-04T23:53:27.185087Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 4 23:53:27.191643 waagent[2635]: 2025-11-04T23:53:27.188872Z INFO Daemon Daemon Downloading artifacts profile blob Nov 4 23:53:27.263568 waagent[2635]: 2025-11-04T23:53:27.263507Z INFO Daemon Downloaded certificate {'thumbprint': 'D5DE9B2412A62EC7759FEFBCCB123A043E950A5B', 'hasPrivateKey': True} Nov 4 23:53:27.266257 waagent[2635]: 2025-11-04T23:53:27.264407Z INFO Daemon Fetch goal state completed Nov 4 23:53:27.271959 waagent[2635]: 2025-11-04T23:53:27.271910Z INFO Daemon Daemon Starting provisioning Nov 4 23:53:27.272718 waagent[2635]: 2025-11-04T23:53:27.272511Z INFO Daemon Daemon Handle ovf-env.xml. Nov 4 23:53:27.272760 waagent[2635]: 2025-11-04T23:53:27.272724Z INFO Daemon Daemon Set hostname [ci-4487.0.0-n-a41d436e4a] Nov 4 23:53:27.287347 waagent[2635]: 2025-11-04T23:53:27.287310Z INFO Daemon Daemon Publish hostname [ci-4487.0.0-n-a41d436e4a] Nov 4 23:53:27.287885 waagent[2635]: 2025-11-04T23:53:27.287722Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 4 23:53:27.288578 waagent[2635]: 2025-11-04T23:53:27.288015Z INFO Daemon Daemon Primary interface is [eth0] Nov 4 23:53:27.307945 systemd-networkd[2190]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:53:27.307953 systemd-networkd[2190]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Nov 4 23:53:27.308010 systemd-networkd[2190]: eth0: DHCP lease lost Nov 4 23:53:27.323803 waagent[2635]: 2025-11-04T23:53:27.323761Z INFO Daemon Daemon Create user account if not exists Nov 4 23:53:27.325394 waagent[2635]: 2025-11-04T23:53:27.325350Z INFO Daemon Daemon User core already exists, skip useradd Nov 4 23:53:27.326371 waagent[2635]: 2025-11-04T23:53:27.325872Z INFO Daemon Daemon Configure sudoer Nov 4 23:53:27.330819 waagent[2635]: 2025-11-04T23:53:27.330777Z INFO Daemon Daemon Configure sshd Nov 4 23:53:27.335326 waagent[2635]: 2025-11-04T23:53:27.335284Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 4 23:53:27.339607 waagent[2635]: 2025-11-04T23:53:27.335842Z INFO Daemon Daemon Deploy ssh public key. Nov 4 23:53:27.344536 systemd-networkd[2190]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 4 23:53:27.466052 login[2638]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 4 23:53:27.470691 systemd-logind[2523]: New session 2 of user core. Nov 4 23:53:27.479642 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 4 23:53:28.457137 waagent[2635]: 2025-11-04T23:53:28.457078Z INFO Daemon Daemon Provisioning complete Nov 4 23:53:28.466545 waagent[2635]: 2025-11-04T23:53:28.466511Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 4 23:53:28.467376 waagent[2635]: 2025-11-04T23:53:28.467089Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Nov 4 23:53:28.472258 waagent[2635]: 2025-11-04T23:53:28.467419Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Nov 4 23:53:28.572384 waagent[2733]: 2025-11-04T23:53:28.572318Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Nov 4 23:53:28.572659 waagent[2733]: 2025-11-04T23:53:28.572414Z INFO ExtHandler ExtHandler OS: flatcar 4487.0.0 Nov 4 23:53:28.572659 waagent[2733]: 2025-11-04T23:53:28.572456Z INFO ExtHandler ExtHandler Python: 3.11.13 Nov 4 23:53:28.572659 waagent[2733]: 2025-11-04T23:53:28.572516Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Nov 4 23:53:28.642267 waagent[2733]: 2025-11-04T23:53:28.642218Z INFO ExtHandler ExtHandler Distro: flatcar-4487.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Nov 4 23:53:28.642409 waagent[2733]: 2025-11-04T23:53:28.642386Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 4 23:53:28.642456 waagent[2733]: 2025-11-04T23:53:28.642437Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 4 23:53:28.655274 waagent[2733]: 2025-11-04T23:53:28.655222Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 4 23:53:28.660149 waagent[2733]: 2025-11-04T23:53:28.660111Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 4 23:53:28.660492 waagent[2733]: 2025-11-04T23:53:28.660460Z INFO ExtHandler Nov 4 23:53:28.660573 waagent[2733]: 2025-11-04T23:53:28.660534Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0b336db0-4cad-43c9-9bde-fc01c7f5bc21 eTag: 4275051529311147731 source: Fabric] Nov 4 23:53:28.660784 waagent[2733]: 2025-11-04T23:53:28.660761Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Nov 4 23:53:28.661131 waagent[2733]: 2025-11-04T23:53:28.661105Z INFO ExtHandler Nov 4 23:53:28.661169 waagent[2733]: 2025-11-04T23:53:28.661146Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 4 23:53:28.663842 waagent[2733]: 2025-11-04T23:53:28.663813Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 4 23:53:28.739049 waagent[2733]: 2025-11-04T23:53:28.738967Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D5DE9B2412A62EC7759FEFBCCB123A043E950A5B', 'hasPrivateKey': True} Nov 4 23:53:28.739349 waagent[2733]: 2025-11-04T23:53:28.739320Z INFO ExtHandler Fetch goal state completed Nov 4 23:53:28.752546 waagent[2733]: 2025-11-04T23:53:28.752500Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Nov 4 23:53:28.756711 waagent[2733]: 2025-11-04T23:53:28.756669Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2733 Nov 4 23:53:28.756835 waagent[2733]: 2025-11-04T23:53:28.756796Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 4 23:53:28.757070 waagent[2733]: 2025-11-04T23:53:28.757048Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Nov 4 23:53:28.758099 waagent[2733]: 2025-11-04T23:53:28.758066Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4487.0.0', '', 'Flatcar Container Linux by Kinvolk'] Nov 4 23:53:28.758367 waagent[2733]: 2025-11-04T23:53:28.758343Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4487.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Nov 4 23:53:28.758467 waagent[2733]: 2025-11-04T23:53:28.758448Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 4 23:53:28.758855 waagent[2733]: 2025-11-04T23:53:28.758832Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 4 23:53:28.826093 waagent[2733]: 2025-11-04T23:53:28.826067Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 4 23:53:28.826229 waagent[2733]: 2025-11-04T23:53:28.826208Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 4 23:53:28.831968 waagent[2733]: 2025-11-04T23:53:28.831597Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 4 23:53:28.836982 systemd[1]: Reload requested from client PID 2748 ('systemctl') (unit waagent.service)... Nov 4 23:53:28.836997 systemd[1]: Reloading... Nov 4 23:53:28.919499 zram_generator::config[2786]: No configuration found. Nov 4 23:53:29.073788 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#300 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 4 23:53:29.108534 systemd[1]: Reloading finished in 271 ms. Nov 4 23:53:29.138011 waagent[2733]: 2025-11-04T23:53:29.137689Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 4 23:53:29.138011 waagent[2733]: 2025-11-04T23:53:29.137830Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 4 23:53:29.549461 waagent[2733]: 2025-11-04T23:53:29.549391Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Nov 4 23:53:29.549746 waagent[2733]: 2025-11-04T23:53:29.549718Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Nov 4 23:53:29.550374 waagent[2733]: 2025-11-04T23:53:29.550339Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 4 23:53:29.550727 waagent[2733]: 2025-11-04T23:53:29.550680Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 4 23:53:29.551004 waagent[2733]: 2025-11-04T23:53:29.550961Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 4 23:53:29.551122 waagent[2733]: 2025-11-04T23:53:29.551008Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 4 23:53:29.551319 waagent[2733]: 2025-11-04T23:53:29.551296Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 4 23:53:29.551385 waagent[2733]: 2025-11-04T23:53:29.551351Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 4 23:53:29.551470 waagent[2733]: 2025-11-04T23:53:29.551438Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 4 23:53:29.551653 waagent[2733]: 2025-11-04T23:53:29.551624Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 4 23:53:29.551786 waagent[2733]: 2025-11-04T23:53:29.551761Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 4 23:53:29.551925 waagent[2733]: 2025-11-04T23:53:29.551892Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 4 23:53:29.551925 waagent[2733]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 4 23:53:29.551925 waagent[2733]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Nov 4 23:53:29.551925 waagent[2733]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 4 23:53:29.551925 waagent[2733]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 4 23:53:29.551925 waagent[2733]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 4 23:53:29.551925 waagent[2733]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 4 23:53:29.552242 waagent[2733]: 2025-11-04T23:53:29.552214Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
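The routing table the agent prints above comes straight from /proc/net/route, where Destination, Gateway and Mask are little-endian hex: 0108C80A in the Gateway column is 10.200.8.1, 0008C80A is the on-link 10.200.8.0/24 network, 10813FA8 is the WireServer 168.63.129.16, and FEA9FEA9 is 169.254.169.254. A small bash sketch of decoding one of those fields by byte-swapping the hex string (the value below is the default gateway from the table above):

# Decode a little-endian /proc/net/route address field.
hex=0108C80A
printf '%d.%d.%d.%d\n' "0x${hex:6:2}" "0x${hex:4:2}" "0x${hex:2:2}" "0x${hex:0:2}"
# prints 10.200.8.1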
Nov 4 23:53:29.552342 waagent[2733]: 2025-11-04T23:53:29.552321Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 4 23:53:29.552797 waagent[2733]: 2025-11-04T23:53:29.552766Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 4 23:53:29.552928 waagent[2733]: 2025-11-04T23:53:29.552902Z INFO EnvHandler ExtHandler Configure routes Nov 4 23:53:29.552972 waagent[2733]: 2025-11-04T23:53:29.552955Z INFO EnvHandler ExtHandler Gateway:None Nov 4 23:53:29.553002 waagent[2733]: 2025-11-04T23:53:29.552989Z INFO EnvHandler ExtHandler Routes:None Nov 4 23:53:29.568089 waagent[2733]: 2025-11-04T23:53:29.568055Z INFO ExtHandler ExtHandler Nov 4 23:53:29.568154 waagent[2733]: 2025-11-04T23:53:29.568117Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5c032428-4f45-4b0c-aa0c-031dbb0c4cf7 correlation c30e39b0-1b40-4341-9938-20cc68e43e4c created: 2025-11-04T23:52:20.189835Z] Nov 4 23:53:29.568398 waagent[2733]: 2025-11-04T23:53:29.568373Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Nov 4 23:53:29.568834 waagent[2733]: 2025-11-04T23:53:29.568810Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Nov 4 23:53:29.596440 waagent[2733]: 2025-11-04T23:53:29.596395Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Nov 4 23:53:29.596440 waagent[2733]: Try `iptables -h' or 'iptables --help' for more information.) Nov 4 23:53:29.596891 waagent[2733]: 2025-11-04T23:53:29.596805Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: BB43AC46-9986-4732-A019-F78F78600A2C;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Nov 4 23:53:29.640647 waagent[2733]: 2025-11-04T23:53:29.640579Z INFO MonitorHandler ExtHandler Network interfaces: Nov 4 23:53:29.640647 waagent[2733]: Executing ['ip', '-a', '-o', 'link']: Nov 4 23:53:29.640647 waagent[2733]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 4 23:53:29.640647 waagent[2733]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:fa:da:d3 brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx7c1e52fadad3 Nov 4 23:53:29.640647 waagent[2733]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:fa:da:d3 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Nov 4 23:53:29.640647 waagent[2733]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 4 23:53:29.640647 waagent[2733]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 4 23:53:29.640647 waagent[2733]: 2: eth0 inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 4 23:53:29.640647 waagent[2733]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 4 23:53:29.640647 waagent[2733]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 4 23:53:29.640647 waagent[2733]: 2: eth0 inet6 fe80::7e1e:52ff:fefa:dad3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 4 23:53:29.706549 waagent[2733]: 2025-11-04T23:53:29.706429Z INFO 
EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Nov 4 23:53:29.706549 waagent[2733]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 4 23:53:29.706549 waagent[2733]: pkts bytes target prot opt in out source destination Nov 4 23:53:29.706549 waagent[2733]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 4 23:53:29.706549 waagent[2733]: pkts bytes target prot opt in out source destination Nov 4 23:53:29.706549 waagent[2733]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 4 23:53:29.706549 waagent[2733]: pkts bytes target prot opt in out source destination Nov 4 23:53:29.706549 waagent[2733]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 4 23:53:29.706549 waagent[2733]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 4 23:53:29.706549 waagent[2733]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 4 23:53:29.709603 waagent[2733]: 2025-11-04T23:53:29.709559Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 4 23:53:29.709603 waagent[2733]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 4 23:53:29.709603 waagent[2733]: pkts bytes target prot opt in out source destination Nov 4 23:53:29.709603 waagent[2733]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 4 23:53:29.709603 waagent[2733]: pkts bytes target prot opt in out source destination Nov 4 23:53:29.709603 waagent[2733]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 4 23:53:29.709603 waagent[2733]: pkts bytes target prot opt in out source destination Nov 4 23:53:29.709603 waagent[2733]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 4 23:53:29.709603 waagent[2733]: 4 587 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 4 23:53:29.709603 waagent[2733]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 4 23:53:36.412857 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 4 23:53:36.414284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:53:36.696186 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 4 23:53:36.697363 systemd[1]: Started sshd@0-10.200.8.39:22-10.200.16.10:41964.service - OpenSSH per-connection server daemon (10.200.16.10:41964). Nov 4 23:53:37.118557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:53:37.123680 (kubelet)[2890]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:53:37.163900 kubelet[2890]: E1104 23:53:37.163869 2890 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:53:37.167030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:53:37.167135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:53:37.167420 systemd[1]: kubelet.service: Consumed 137ms CPU time, 107.9M memory peak. Nov 4 23:53:37.533145 sshd[2882]: Accepted publickey for core from 10.200.16.10 port 41964 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:53:37.534241 sshd-session[2882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:37.538573 systemd-logind[2523]: New session 3 of user core. 
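The waagent firewall rules listed above (and re-listed as the current rules) restrict access to the WireServer: DNS over tcp/53 and root-owned (UID 0) connections to 168.63.129.16 are accepted, and any other attempt to open a new connection to it is dropped. Roughly equivalent commands, assuming the rules live in the security table that the agent's own listing command (iptables -w -t security -L OUTPUT ...) queries:

# Approximate shell equivalent of the three OUTPUT rules shown above.
iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP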
Nov 4 23:53:37.544626 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 4 23:53:38.089516 systemd[1]: Started sshd@1-10.200.8.39:22-10.200.16.10:41978.service - OpenSSH per-connection server daemon (10.200.16.10:41978). Nov 4 23:53:38.715624 sshd[2900]: Accepted publickey for core from 10.200.16.10 port 41978 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:53:38.716774 sshd-session[2900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:38.720394 systemd-logind[2523]: New session 4 of user core. Nov 4 23:53:38.731617 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 4 23:53:39.159076 sshd[2903]: Connection closed by 10.200.16.10 port 41978 Nov 4 23:53:39.159824 sshd-session[2900]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:39.163166 systemd[1]: sshd@1-10.200.8.39:22-10.200.16.10:41978.service: Deactivated successfully. Nov 4 23:53:39.164572 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 23:53:39.165243 systemd-logind[2523]: Session 4 logged out. Waiting for processes to exit. Nov 4 23:53:39.166343 systemd-logind[2523]: Removed session 4. Nov 4 23:53:39.271028 systemd[1]: Started sshd@2-10.200.8.39:22-10.200.16.10:41984.service - OpenSSH per-connection server daemon (10.200.16.10:41984). Nov 4 23:53:39.900790 sshd[2909]: Accepted publickey for core from 10.200.16.10 port 41984 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:53:39.901945 sshd-session[2909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:39.906324 systemd-logind[2523]: New session 5 of user core. Nov 4 23:53:39.912640 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 4 23:53:40.342111 sshd[2912]: Connection closed by 10.200.16.10 port 41984 Nov 4 23:53:40.342643 sshd-session[2909]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:40.345983 systemd[1]: sshd@2-10.200.8.39:22-10.200.16.10:41984.service: Deactivated successfully. Nov 4 23:53:40.347421 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 23:53:40.348422 systemd-logind[2523]: Session 5 logged out. Waiting for processes to exit. Nov 4 23:53:40.349172 systemd-logind[2523]: Removed session 5. Nov 4 23:53:40.451967 systemd[1]: Started sshd@3-10.200.8.39:22-10.200.16.10:34052.service - OpenSSH per-connection server daemon (10.200.16.10:34052). Nov 4 23:53:41.209122 sshd[2918]: Accepted publickey for core from 10.200.16.10 port 34052 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:53:41.210228 sshd-session[2918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:41.214674 systemd-logind[2523]: New session 6 of user core. Nov 4 23:53:41.220635 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 4 23:53:41.653501 sshd[2921]: Connection closed by 10.200.16.10 port 34052 Nov 4 23:53:41.654086 sshd-session[2918]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:41.657361 systemd[1]: sshd@3-10.200.8.39:22-10.200.16.10:34052.service: Deactivated successfully. Nov 4 23:53:41.658800 systemd[1]: session-6.scope: Deactivated successfully. Nov 4 23:53:41.659522 systemd-logind[2523]: Session 6 logged out. Waiting for processes to exit. Nov 4 23:53:41.660615 systemd-logind[2523]: Removed session 6. Nov 4 23:53:41.784019 systemd[1]: Started sshd@4-10.200.8.39:22-10.200.16.10:34058.service - OpenSSH per-connection server daemon (10.200.16.10:34058). 
Nov 4 23:53:42.424474 sshd[2927]: Accepted publickey for core from 10.200.16.10 port 34058 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:53:42.425567 sshd-session[2927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:42.429539 systemd-logind[2523]: New session 7 of user core. Nov 4 23:53:42.439620 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 4 23:53:42.987067 sudo[2931]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 23:53:42.987299 sudo[2931]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:53:43.018232 sudo[2931]: pam_unix(sudo:session): session closed for user root Nov 4 23:53:43.125558 sshd[2930]: Connection closed by 10.200.16.10 port 34058 Nov 4 23:53:43.127397 sshd-session[2927]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:43.130998 systemd-logind[2523]: Session 7 logged out. Waiting for processes to exit. Nov 4 23:53:43.131177 systemd[1]: sshd@4-10.200.8.39:22-10.200.16.10:34058.service: Deactivated successfully. Nov 4 23:53:43.132817 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 23:53:43.134044 systemd-logind[2523]: Removed session 7. Nov 4 23:53:43.239212 systemd[1]: Started sshd@5-10.200.8.39:22-10.200.16.10:34070.service - OpenSSH per-connection server daemon (10.200.16.10:34070). Nov 4 23:53:43.873200 sshd[2937]: Accepted publickey for core from 10.200.16.10 port 34070 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:53:43.874327 sshd-session[2937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:43.878567 systemd-logind[2523]: New session 8 of user core. Nov 4 23:53:43.887620 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 23:53:44.217350 sudo[2942]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 23:53:44.217598 sudo[2942]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:53:44.224006 sudo[2942]: pam_unix(sudo:session): session closed for user root Nov 4 23:53:44.228799 sudo[2941]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 23:53:44.229017 sudo[2941]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:53:44.236647 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:53:44.266030 augenrules[2964]: No rules Nov 4 23:53:44.267036 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:53:44.267239 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:53:44.268902 sudo[2941]: pam_unix(sudo:session): session closed for user root Nov 4 23:53:44.371175 sshd[2940]: Connection closed by 10.200.16.10 port 34070 Nov 4 23:53:44.371607 sshd-session[2937]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:44.374754 systemd[1]: sshd@5-10.200.8.39:22-10.200.16.10:34070.service: Deactivated successfully. Nov 4 23:53:44.376124 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 23:53:44.377085 systemd-logind[2523]: Session 8 logged out. Waiting for processes to exit. Nov 4 23:53:44.378150 systemd-logind[2523]: Removed session 8. Nov 4 23:53:44.486202 systemd[1]: Started sshd@6-10.200.8.39:22-10.200.16.10:34080.service - OpenSSH per-connection server daemon (10.200.16.10:34080). 
Nov 4 23:53:45.120497 sshd[2973]: Accepted publickey for core from 10.200.16.10 port 34080 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:53:45.121591 sshd-session[2973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:45.126183 systemd-logind[2523]: New session 9 of user core. Nov 4 23:53:45.131625 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 23:53:45.464885 sudo[2977]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 4 23:53:45.465118 sudo[2977]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:53:46.970326 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 4 23:53:46.976866 (dockerd)[2995]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 4 23:53:47.412887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 4 23:53:47.414417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:53:47.753234 dockerd[2995]: time="2025-11-04T23:53:47.753128034Z" level=info msg="Starting up" Nov 4 23:53:47.754032 dockerd[2995]: time="2025-11-04T23:53:47.753942979Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 4 23:53:47.765248 dockerd[2995]: time="2025-11-04T23:53:47.765208671Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 4 23:53:47.837576 chronyd[2502]: Selected source PHC0 Nov 4 23:53:48.069993 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:53:48.078736 (kubelet)[3022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:53:48.086684 dockerd[2995]: time="2025-11-04T23:53:48.086635122Z" level=info msg="Loading containers: start." Nov 4 23:53:48.115196 kubelet[3022]: E1104 23:53:48.115160 3022 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:53:48.117041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:53:48.117208 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:53:48.117583 systemd[1]: kubelet.service: Consumed 133ms CPU time, 110.8M memory peak. Nov 4 23:53:48.176500 kernel: Initializing XFRM netlink socket Nov 4 23:53:48.685360 systemd-networkd[2190]: docker0: Link UP Nov 4 23:53:48.705689 dockerd[2995]: time="2025-11-04T23:53:48.705653907Z" level=info msg="Loading containers: done." 
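Two things are worth separating in the block above. The docker.service message about DOCKER_OPTS, DOCKER_CGROUPS and friends is informational: those variables are optional hooks referenced by the unit and simply expand to empty strings when unset. If extra daemon flags were ever needed, a drop-in is one way to supply them (a sketch only, assuming the unit expands $DOCKER_OPTS on its ExecStart line as the warning implies; --log-level is just an example dockerd flag). The kubelet failure is a separate issue and is discussed after its second occurrence below.

    sudo systemctl edit docker
    # in the editor add, for example:
    #   [Service]
    #   Environment=DOCKER_OPTS=--log-level=warn
    sudo systemctl restart docker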
Nov 4 23:53:48.801946 dockerd[2995]: time="2025-11-04T23:53:48.801879724Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 4 23:53:48.802311 dockerd[2995]: time="2025-11-04T23:53:48.801990415Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 4 23:53:48.802311 dockerd[2995]: time="2025-11-04T23:53:48.802071888Z" level=info msg="Initializing buildkit" Nov 4 23:53:48.859555 dockerd[2995]: time="2025-11-04T23:53:48.859517608Z" level=info msg="Completed buildkit initialization" Nov 4 23:53:48.865759 dockerd[2995]: time="2025-11-04T23:53:48.865700963Z" level=info msg="Daemon has completed initialization" Nov 4 23:53:48.865946 dockerd[2995]: time="2025-11-04T23:53:48.865913867Z" level=info msg="API listen on /run/docker.sock" Nov 4 23:53:48.866554 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 4 23:53:50.027827 containerd[2550]: time="2025-11-04T23:53:50.027792769Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 4 23:53:50.756061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1885970254.mount: Deactivated successfully. Nov 4 23:53:52.157388 containerd[2550]: time="2025-11-04T23:53:52.157337375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:52.163259 containerd[2550]: time="2025-11-04T23:53:52.163228020Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114901" Nov 4 23:53:52.166186 containerd[2550]: time="2025-11-04T23:53:52.166144467Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:52.170003 containerd[2550]: time="2025-11-04T23:53:52.169757280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:52.170505 containerd[2550]: time="2025-11-04T23:53:52.170465840Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.142634799s" Nov 4 23:53:52.170551 containerd[2550]: time="2025-11-04T23:53:52.170516163Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 4 23:53:52.171234 containerd[2550]: time="2025-11-04T23:53:52.171210503Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 4 23:53:53.970001 containerd[2550]: time="2025-11-04T23:53:53.969957027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:53.972341 containerd[2550]: time="2025-11-04T23:53:53.972303651Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active 
requests=0, bytes read=26020852" Nov 4 23:53:53.976742 containerd[2550]: time="2025-11-04T23:53:53.976689915Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:53.981512 containerd[2550]: time="2025-11-04T23:53:53.981309087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:53.982218 containerd[2550]: time="2025-11-04T23:53:53.982195466Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.810960104s" Nov 4 23:53:53.982269 containerd[2550]: time="2025-11-04T23:53:53.982224320Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 4 23:53:53.982914 containerd[2550]: time="2025-11-04T23:53:53.982891140Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 4 23:53:55.176869 containerd[2550]: time="2025-11-04T23:53:55.176822136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:55.180255 containerd[2550]: time="2025-11-04T23:53:55.180110427Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155576" Nov 4 23:53:55.183321 containerd[2550]: time="2025-11-04T23:53:55.183295937Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:55.202739 containerd[2550]: time="2025-11-04T23:53:55.202707347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:55.203591 containerd[2550]: time="2025-11-04T23:53:55.203452014Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.220532783s" Nov 4 23:53:55.203591 containerd[2550]: time="2025-11-04T23:53:55.203498758Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 4 23:53:55.204072 containerd[2550]: time="2025-11-04T23:53:55.204057203Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 4 23:53:56.127816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435727978.mount: Deactivated successfully. 
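The PullImage/ImageCreate entries above are containerd's CRI image service fetching the control-plane images. What actually landed locally can be checked either through the CRI view or directly in containerd's k8s.io namespace (a sketch; crictl and ctr are the usual client tools on containerd nodes):

    crictl images | grep registry.k8s.io
    ctr -n k8s.io images ls -q | grep registry.k8s.io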
Nov 4 23:53:56.521518 containerd[2550]: time="2025-11-04T23:53:56.521462798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:56.525009 containerd[2550]: time="2025-11-04T23:53:56.524926386Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929477" Nov 4 23:53:56.533862 containerd[2550]: time="2025-11-04T23:53:56.533836193Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:56.537822 containerd[2550]: time="2025-11-04T23:53:56.537781426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:56.538290 containerd[2550]: time="2025-11-04T23:53:56.538113607Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.334004405s" Nov 4 23:53:56.538290 containerd[2550]: time="2025-11-04T23:53:56.538144421Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 4 23:53:56.538672 containerd[2550]: time="2025-11-04T23:53:56.538650506Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 4 23:53:57.182600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1887318794.mount: Deactivated successfully. Nov 4 23:53:58.162836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 4 23:53:58.166708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:53:58.684590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
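"Scheduled restart job, restart counter is at 3" means systemd is cycling kubelet.service under its Restart= policy while the unit keeps exiting (the missing config file, see below). The loop can be inspected without waiting for the next attempt (a sketch):

    systemctl show kubelet -p NRestarts,Restart,RestartUSec
    systemctl cat kubelet --no-pager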
Nov 4 23:53:58.689824 (kubelet)[3351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:53:58.714155 containerd[2550]: time="2025-11-04T23:53:58.714106478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:58.717747 containerd[2550]: time="2025-11-04T23:53:58.717573342Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Nov 4 23:53:58.721042 containerd[2550]: time="2025-11-04T23:53:58.721009776Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:58.726054 kubelet[3351]: E1104 23:53:58.725973 3351 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:53:58.726741 containerd[2550]: time="2025-11-04T23:53:58.726442028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:58.727276 containerd[2550]: time="2025-11-04T23:53:58.727252186Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.188575983s" Nov 4 23:53:58.727322 containerd[2550]: time="2025-11-04T23:53:58.727286232Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 4 23:53:58.728258 containerd[2550]: time="2025-11-04T23:53:58.728238992Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 4 23:53:58.729140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:53:58.729257 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:53:58.729545 systemd[1]: kubelet.service: Consumed 139ms CPU time, 108.6M memory peak. Nov 4 23:53:59.357468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1413058498.mount: Deactivated successfully. 
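The kubelet exits at 23:53:48 and again here for the same reason: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-style bootstrap, which the static-pod manifests later in this log suggest, that file is written during kubeadm init/join, so the crash loop is expected until then. For orientation only, a minimal KubeletConfiguration has roughly this shape (illustrative values, not the file this node will eventually receive):

    cat <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches "Using cgroup driver ... systemd" below
    staticPodPath: /etc/kubernetes/manifests   # matches "Adding static pod path" below
    EOF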
Nov 4 23:53:59.379396 containerd[2550]: time="2025-11-04T23:53:59.379337760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:53:59.381935 containerd[2550]: time="2025-11-04T23:53:59.381908186Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Nov 4 23:53:59.385055 containerd[2550]: time="2025-11-04T23:53:59.385012114Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:53:59.388744 containerd[2550]: time="2025-11-04T23:53:59.388698530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:53:59.389430 containerd[2550]: time="2025-11-04T23:53:59.389071745Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 660.733383ms" Nov 4 23:53:59.389430 containerd[2550]: time="2025-11-04T23:53:59.389101924Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 4 23:53:59.389692 containerd[2550]: time="2025-11-04T23:53:59.389668725Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 4 23:53:59.990149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2899982085.mount: Deactivated successfully. 
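The images fetched in this stretch (kube-apiserver/controller-manager/scheduler/proxy v1.33.5, coredns v1.12.0, pause 3.10, plus the etcd pull that completes below) are the standard control-plane set. Assuming a kubeadm-driven bootstrap, the expected list can be printed up front and compared against the pulls seen here:

    kubeadm config images list --kubernetes-version v1.33.5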
Nov 4 23:54:02.006575 containerd[2550]: time="2025-11-04T23:54:02.006530517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:02.009927 containerd[2550]: time="2025-11-04T23:54:02.009890178Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378441" Nov 4 23:54:02.013981 containerd[2550]: time="2025-11-04T23:54:02.013951802Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:02.019833 containerd[2550]: time="2025-11-04T23:54:02.019168816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:02.022371 containerd[2550]: time="2025-11-04T23:54:02.022334525Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.632642111s" Nov 4 23:54:02.022443 containerd[2550]: time="2025-11-04T23:54:02.022376253Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 4 23:54:02.274534 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Nov 4 23:54:04.338462 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:04.338642 systemd[1]: kubelet.service: Consumed 139ms CPU time, 108.6M memory peak. Nov 4 23:54:04.340672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:04.363613 systemd[1]: Reload requested from client PID 3445 ('systemctl') (unit session-9.scope)... Nov 4 23:54:04.363729 systemd[1]: Reloading... Nov 4 23:54:04.455594 zram_generator::config[3496]: No configuration found. Nov 4 23:54:04.646333 systemd[1]: Reloading finished in 282 ms. Nov 4 23:54:04.682796 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 23:54:04.682870 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 23:54:04.683290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:04.683331 systemd[1]: kubelet.service: Consumed 69ms CPU time, 69.8M memory peak. Nov 4 23:54:04.685264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:05.349665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:05.355699 (kubelet)[3560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:54:05.391735 kubelet[3560]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:54:05.391735 kubelet[3560]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
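The flag-deprecation warnings here and in the entry that follows mean the kubelet is still launched with command-line flags, typically injected by a systemd drop-in; --container-runtime-endpoint and --volume-plugin-dir have KubeletConfiguration equivalents. A sketch for locating and migrating them (field names are from the kubelet.config.k8s.io/v1beta1 API; the endpoint value is an assumption based on the containerd runtime reported below, and the plugin path is the one this kubelet logs):

    systemctl cat kubelet --no-pager   # shows the drop-in(s) that add the deprecated flags
    # KubeletConfiguration equivalents:
    #   containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    #   volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/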
Nov 4 23:54:05.391735 kubelet[3560]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:54:05.392006 kubelet[3560]: I1104 23:54:05.391782 3560 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:54:05.543823 kubelet[3560]: I1104 23:54:05.543796 3560 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 23:54:05.543823 kubelet[3560]: I1104 23:54:05.543817 3560 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:54:05.544021 kubelet[3560]: I1104 23:54:05.544009 3560 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:54:05.574334 kubelet[3560]: I1104 23:54:05.574132 3560 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:54:05.574571 kubelet[3560]: E1104 23:54:05.574549 3560 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 23:54:05.583644 kubelet[3560]: I1104 23:54:05.583628 3560 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:54:05.587611 kubelet[3560]: I1104 23:54:05.587593 3560 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 4 23:54:05.587801 kubelet[3560]: I1104 23:54:05.587782 3560 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:54:05.587935 kubelet[3560]: I1104 23:54:05.587801 3560 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4487.0.0-n-a41d436e4a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:54:05.588051 kubelet[3560]: I1104 23:54:05.587939 3560 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 23:54:05.588051 kubelet[3560]: I1104 23:54:05.587949 3560 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 23:54:05.588051 kubelet[3560]: I1104 23:54:05.588048 3560 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:54:05.590078 kubelet[3560]: I1104 23:54:05.590063 3560 kubelet.go:480] "Attempting to sync node with API server" Nov 4 23:54:05.590145 kubelet[3560]: I1104 23:54:05.590101 3560 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:54:05.590145 kubelet[3560]: I1104 23:54:05.590130 3560 kubelet.go:386] "Adding apiserver pod source" Nov 4 23:54:05.590145 kubelet[3560]: I1104 23:54:05.590145 3560 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:54:05.596421 kubelet[3560]: E1104 23:54:05.596224 3560 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:54:05.596421 kubelet[3560]: E1104 23:54:05.596320 3560 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-n-a41d436e4a&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:54:05.596545 kubelet[3560]: I1104 23:54:05.596499 3560 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:54:05.597000 kubelet[3560]: I1104 23:54:05.596986 3560 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet 
mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:54:05.598468 kubelet[3560]: W1104 23:54:05.598094 3560 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 23:54:05.601030 kubelet[3560]: I1104 23:54:05.600354 3560 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 23:54:05.601170 kubelet[3560]: I1104 23:54:05.601161 3560 server.go:1289] "Started kubelet" Nov 4 23:54:05.605687 kubelet[3560]: I1104 23:54:05.604876 3560 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:54:05.605758 kubelet[3560]: I1104 23:54:05.605692 3560 server.go:317] "Adding debug handlers to kubelet server" Nov 4 23:54:05.608397 kubelet[3560]: I1104 23:54:05.608338 3560 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:54:05.609179 kubelet[3560]: I1104 23:54:05.608663 3560 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:54:05.609864 kubelet[3560]: I1104 23:54:05.609851 3560 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:54:05.610424 kubelet[3560]: E1104 23:54:05.609043 3560 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.0-n-a41d436e4a.1874f2eb0f910090 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.0-n-a41d436e4a,UID:ci-4487.0.0-n-a41d436e4a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.0-n-a41d436e4a,},FirstTimestamp:2025-11-04 23:54:05.601104016 +0000 UTC m=+0.241975014,LastTimestamp:2025-11-04 23:54:05.601104016 +0000 UTC m=+0.241975014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.0-n-a41d436e4a,}" Nov 4 23:54:05.610594 kubelet[3560]: I1104 23:54:05.610579 3560 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 23:54:05.615084 kubelet[3560]: E1104 23:54:05.615062 3560 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 23:54:05.615195 kubelet[3560]: E1104 23:54:05.615181 3560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-n-a41d436e4a\" not found" Nov 4 23:54:05.615226 kubelet[3560]: I1104 23:54:05.615220 3560 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 23:54:05.615429 kubelet[3560]: I1104 23:54:05.615417 3560 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 23:54:05.615590 kubelet[3560]: I1104 23:54:05.615469 3560 reconciler.go:26] "Reconciler: start to sync state" Nov 4 23:54:05.616517 kubelet[3560]: I1104 23:54:05.616476 3560 factory.go:223] Registration of the systemd container factory successfully Nov 4 23:54:05.616582 kubelet[3560]: I1104 23:54:05.616566 3560 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 23:54:05.616957 kubelet[3560]: E1104 23:54:05.616935 3560 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:54:05.617802 kubelet[3560]: I1104 23:54:05.617778 3560 factory.go:223] Registration of the containerd container factory successfully Nov 4 23:54:05.621507 kubelet[3560]: E1104 23:54:05.621062 3560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-a41d436e4a?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="200ms" Nov 4 23:54:05.644150 kubelet[3560]: I1104 23:54:05.644134 3560 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 23:54:05.644150 kubelet[3560]: I1104 23:54:05.644148 3560 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 23:54:05.644241 kubelet[3560]: I1104 23:54:05.644162 3560 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:54:05.651285 kubelet[3560]: I1104 23:54:05.651270 3560 policy_none.go:49] "None policy: Start" Nov 4 23:54:05.651285 kubelet[3560]: I1104 23:54:05.651287 3560 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 23:54:05.651369 kubelet[3560]: I1104 23:54:05.651297 3560 state_mem.go:35] "Initializing new in-memory state store" Nov 4 23:54:05.659444 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 23:54:05.671359 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 23:54:05.674229 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
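Every "dial tcp 10.200.8.39:6443: connect: connection refused" in the block above (certificate signing request, event posts, reflector lists, the node lease) is expected at this stage: this kubelet is the one that will run the API server as a static pod, so nothing listens on 6443 until that pod is up. Two quick checks that reflect the same state (a sketch):

    ss -ltnp | grep 6443                       # nothing listening yet at this point in the log
    curl -k https://10.200.8.39:6443/healthz   # succeeds only once kube-apiserver is running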
Nov 4 23:54:05.682041 kubelet[3560]: E1104 23:54:05.682025 3560 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 23:54:05.682289 kubelet[3560]: I1104 23:54:05.682264 3560 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:54:05.682334 kubelet[3560]: I1104 23:54:05.682279 3560 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:54:05.682502 kubelet[3560]: I1104 23:54:05.682444 3560 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 23:54:05.684631 kubelet[3560]: E1104 23:54:05.684574 3560 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 23:54:05.684631 kubelet[3560]: E1104 23:54:05.684608 3560 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.0-n-a41d436e4a\" not found" Nov 4 23:54:05.728880 kubelet[3560]: I1104 23:54:05.728854 3560 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 23:54:05.730959 kubelet[3560]: I1104 23:54:05.730902 3560 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 23:54:05.730959 kubelet[3560]: I1104 23:54:05.730941 3560 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 23:54:05.730959 kubelet[3560]: I1104 23:54:05.730958 3560 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 23:54:05.731068 kubelet[3560]: I1104 23:54:05.730965 3560 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 23:54:05.731851 kubelet[3560]: E1104 23:54:05.731701 3560 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Nov 4 23:54:05.731851 kubelet[3560]: E1104 23:54:05.731822 3560 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:54:05.783821 kubelet[3560]: I1104 23:54:05.783798 3560 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:05.784092 kubelet[3560]: E1104 23:54:05.784056 3560 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:05.821824 kubelet[3560]: E1104 23:54:05.821782 3560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-a41d436e4a?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="400ms" Nov 4 23:54:05.853112 systemd[1]: Created slice kubepods-burstable-pod6d0e08398271ea85d5b7bfb8b4f92aaa.slice - libcontainer container kubepods-burstable-pod6d0e08398271ea85d5b7bfb8b4f92aaa.slice. 
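The kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units created above (with per-pod slices following) are how the kubelet's systemd cgroup driver lays out pod resource accounting. The hierarchy can be browsed with ordinary systemd tools (a sketch):

    systemctl status kubepods.slice kubepods-burstable.slice kubepods-besteffort.slice
    systemd-cgls --no-pager   # the kubepods*.slice subtree appears in the full cgroup tree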
Nov 4 23:54:05.859318 kubelet[3560]: E1104 23:54:05.859299 3560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-a41d436e4a\" not found" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:05.865516 systemd[1]: Created slice kubepods-burstable-podcfd909f0a4a9669c0b158d5bc2fc6133.slice - libcontainer container kubepods-burstable-podcfd909f0a4a9669c0b158d5bc2fc6133.slice. Nov 4 23:54:05.867197 kubelet[3560]: E1104 23:54:05.867041 3560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-a41d436e4a\" not found" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:05.884997 systemd[1]: Created slice kubepods-burstable-podc4522e450d717f67040520ce1ac0744e.slice - libcontainer container kubepods-burstable-podc4522e450d717f67040520ce1ac0744e.slice. Nov 4 23:54:05.889526 kubelet[3560]: E1104 23:54:05.888924 3560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-a41d436e4a\" not found" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:05.917118 kubelet[3560]: I1104 23:54:05.917095 3560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfd909f0a4a9669c0b158d5bc2fc6133-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-a41d436e4a\" (UID: \"cfd909f0a4a9669c0b158d5bc2fc6133\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:05.917189 kubelet[3560]: I1104 23:54:05.917136 3560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4522e450d717f67040520ce1ac0744e-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-n-a41d436e4a\" (UID: \"c4522e450d717f67040520ce1ac0744e\") " pod="kube-system/kube-scheduler-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:05.917189 kubelet[3560]: I1104 23:54:05.917163 3560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d0e08398271ea85d5b7bfb8b4f92aaa-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-n-a41d436e4a\" (UID: \"6d0e08398271ea85d5b7bfb8b4f92aaa\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:05.917189 kubelet[3560]: I1104 23:54:05.917181 3560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d0e08398271ea85d5b7bfb8b4f92aaa-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-n-a41d436e4a\" (UID: \"6d0e08398271ea85d5b7bfb8b4f92aaa\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:05.917266 kubelet[3560]: I1104 23:54:05.917198 3560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d0e08398271ea85d5b7bfb8b4f92aaa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-n-a41d436e4a\" (UID: \"6d0e08398271ea85d5b7bfb8b4f92aaa\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:05.917266 kubelet[3560]: I1104 23:54:05.917219 3560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cfd909f0a4a9669c0b158d5bc2fc6133-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-n-a41d436e4a\" (UID: 
\"cfd909f0a4a9669c0b158d5bc2fc6133\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:05.917266 kubelet[3560]: I1104 23:54:05.917236 3560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfd909f0a4a9669c0b158d5bc2fc6133-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-n-a41d436e4a\" (UID: \"cfd909f0a4a9669c0b158d5bc2fc6133\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:05.917266 kubelet[3560]: I1104 23:54:05.917259 3560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfd909f0a4a9669c0b158d5bc2fc6133-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-a41d436e4a\" (UID: \"cfd909f0a4a9669c0b158d5bc2fc6133\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:05.917361 kubelet[3560]: I1104 23:54:05.917278 3560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cfd909f0a4a9669c0b158d5bc2fc6133-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-n-a41d436e4a\" (UID: \"cfd909f0a4a9669c0b158d5bc2fc6133\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:05.985586 kubelet[3560]: I1104 23:54:05.985566 3560 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:05.986091 kubelet[3560]: E1104 23:54:05.985896 3560 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:06.161210 containerd[2550]: time="2025-11-04T23:54:06.160949691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-n-a41d436e4a,Uid:6d0e08398271ea85d5b7bfb8b4f92aaa,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:06.168471 containerd[2550]: time="2025-11-04T23:54:06.168435251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-n-a41d436e4a,Uid:cfd909f0a4a9669c0b158d5bc2fc6133,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:06.190258 containerd[2550]: time="2025-11-04T23:54:06.190231667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-n-a41d436e4a,Uid:c4522e450d717f67040520ce1ac0744e,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:06.223073 kubelet[3560]: E1104 23:54:06.223039 3560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-a41d436e4a?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="800ms" Nov 4 23:54:06.388125 kubelet[3560]: I1104 23:54:06.388085 3560 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:06.388431 kubelet[3560]: E1104 23:54:06.388407 3560 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:06.597406 containerd[2550]: time="2025-11-04T23:54:06.597348569Z" level=info msg="connecting to shim 
b59801487341475e28f6821fc391c18fc5be7fb97a68e52a39d42b91d6d06586" address="unix:///run/containerd/s/4b13de2799da02e1121adbdbe7c7116d570c76e1ceae895b9c9b4fd7c818f54a" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:06.609569 containerd[2550]: time="2025-11-04T23:54:06.609462493Z" level=info msg="connecting to shim 411d142561aa2aeeaf1804195c1f084d03ad0f2d2fdae8b4310c8bc0dec83ca7" address="unix:///run/containerd/s/17b56600aaa3fce0c008e48c4ef7742318c7f0bdbe14bbed7c6c5c12a17a5d2a" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:06.615071 containerd[2550]: time="2025-11-04T23:54:06.614900077Z" level=info msg="connecting to shim e07d2f3e39e38fd66a3dbac133785a96a481318bcd0e227da3dbb924188129b8" address="unix:///run/containerd/s/b8a87094e25749e4db419fc4885522818ff931fa4d9b9be9b10a58d104e24b91" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:06.631638 systemd[1]: Started cri-containerd-b59801487341475e28f6821fc391c18fc5be7fb97a68e52a39d42b91d6d06586.scope - libcontainer container b59801487341475e28f6821fc391c18fc5be7fb97a68e52a39d42b91d6d06586. Nov 4 23:54:06.644644 systemd[1]: Started cri-containerd-411d142561aa2aeeaf1804195c1f084d03ad0f2d2fdae8b4310c8bc0dec83ca7.scope - libcontainer container 411d142561aa2aeeaf1804195c1f084d03ad0f2d2fdae8b4310c8bc0dec83ca7. Nov 4 23:54:06.647392 kubelet[3560]: E1104 23:54:06.647364 3560 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-n-a41d436e4a&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:54:06.652609 systemd[1]: Started cri-containerd-e07d2f3e39e38fd66a3dbac133785a96a481318bcd0e227da3dbb924188129b8.scope - libcontainer container e07d2f3e39e38fd66a3dbac133785a96a481318bcd0e227da3dbb924188129b8. 
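The cri-containerd-*.scope units above are the pod sandboxes for the three static control-plane pods the kubelet found under its static pod path (/etc/kubernetes/manifests, per the "Adding static pod path" entry earlier). They can be correlated with the CRI view like this (a sketch):

    ls /etc/kubernetes/manifests/
    crictl pods --name kube-apiserver
    crictl pods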
Nov 4 23:54:06.725850 containerd[2550]: time="2025-11-04T23:54:06.725823569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-n-a41d436e4a,Uid:c4522e450d717f67040520ce1ac0744e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e07d2f3e39e38fd66a3dbac133785a96a481318bcd0e227da3dbb924188129b8\"" Nov 4 23:54:06.737668 containerd[2550]: time="2025-11-04T23:54:06.737645832Z" level=info msg="CreateContainer within sandbox \"e07d2f3e39e38fd66a3dbac133785a96a481318bcd0e227da3dbb924188129b8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 23:54:06.738231 containerd[2550]: time="2025-11-04T23:54:06.738210426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-n-a41d436e4a,Uid:cfd909f0a4a9669c0b158d5bc2fc6133,Namespace:kube-system,Attempt:0,} returns sandbox id \"411d142561aa2aeeaf1804195c1f084d03ad0f2d2fdae8b4310c8bc0dec83ca7\"" Nov 4 23:54:06.740840 containerd[2550]: time="2025-11-04T23:54:06.740809130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-n-a41d436e4a,Uid:6d0e08398271ea85d5b7bfb8b4f92aaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"b59801487341475e28f6821fc391c18fc5be7fb97a68e52a39d42b91d6d06586\"" Nov 4 23:54:06.745648 containerd[2550]: time="2025-11-04T23:54:06.745619746Z" level=info msg="CreateContainer within sandbox \"411d142561aa2aeeaf1804195c1f084d03ad0f2d2fdae8b4310c8bc0dec83ca7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 23:54:06.750567 containerd[2550]: time="2025-11-04T23:54:06.750533532Z" level=info msg="CreateContainer within sandbox \"b59801487341475e28f6821fc391c18fc5be7fb97a68e52a39d42b91d6d06586\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 23:54:06.773805 containerd[2550]: time="2025-11-04T23:54:06.773779684Z" level=info msg="Container e0d454a6a9366894fc1be0d0eac8a87f36a04bb42e25e3bf081b64f44c4f054b: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:06.782541 containerd[2550]: time="2025-11-04T23:54:06.782455025Z" level=info msg="Container 2c5348ae40cebfb9fe638319d13daac075890cb0501ca7ba2c12551fde7b5fbb: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:06.797276 kubelet[3560]: E1104 23:54:06.797229 3560 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:54:06.811792 containerd[2550]: time="2025-11-04T23:54:06.811764318Z" level=info msg="Container 747c437f1273a0d640e6929a853dfd1a0a2d8d15fe8a3976ebeec7fec1884175: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:06.820116 containerd[2550]: time="2025-11-04T23:54:06.820076358Z" level=info msg="CreateContainer within sandbox \"e07d2f3e39e38fd66a3dbac133785a96a481318bcd0e227da3dbb924188129b8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e0d454a6a9366894fc1be0d0eac8a87f36a04bb42e25e3bf081b64f44c4f054b\"" Nov 4 23:54:06.820779 containerd[2550]: time="2025-11-04T23:54:06.820751476Z" level=info msg="StartContainer for \"e0d454a6a9366894fc1be0d0eac8a87f36a04bb42e25e3bf081b64f44c4f054b\"" Nov 4 23:54:06.822393 containerd[2550]: time="2025-11-04T23:54:06.821756079Z" level=info msg="connecting to shim 
e0d454a6a9366894fc1be0d0eac8a87f36a04bb42e25e3bf081b64f44c4f054b" address="unix:///run/containerd/s/b8a87094e25749e4db419fc4885522818ff931fa4d9b9be9b10a58d104e24b91" protocol=ttrpc version=3 Nov 4 23:54:06.823918 kubelet[3560]: E1104 23:54:06.823881 3560 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:54:06.830324 containerd[2550]: time="2025-11-04T23:54:06.830291998Z" level=info msg="CreateContainer within sandbox \"411d142561aa2aeeaf1804195c1f084d03ad0f2d2fdae8b4310c8bc0dec83ca7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2c5348ae40cebfb9fe638319d13daac075890cb0501ca7ba2c12551fde7b5fbb\"" Nov 4 23:54:06.830784 containerd[2550]: time="2025-11-04T23:54:06.830762550Z" level=info msg="StartContainer for \"2c5348ae40cebfb9fe638319d13daac075890cb0501ca7ba2c12551fde7b5fbb\"" Nov 4 23:54:06.832853 containerd[2550]: time="2025-11-04T23:54:06.831717740Z" level=info msg="connecting to shim 2c5348ae40cebfb9fe638319d13daac075890cb0501ca7ba2c12551fde7b5fbb" address="unix:///run/containerd/s/17b56600aaa3fce0c008e48c4ef7742318c7f0bdbe14bbed7c6c5c12a17a5d2a" protocol=ttrpc version=3 Nov 4 23:54:06.837426 containerd[2550]: time="2025-11-04T23:54:06.837219668Z" level=info msg="CreateContainer within sandbox \"b59801487341475e28f6821fc391c18fc5be7fb97a68e52a39d42b91d6d06586\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"747c437f1273a0d640e6929a853dfd1a0a2d8d15fe8a3976ebeec7fec1884175\"" Nov 4 23:54:06.837855 containerd[2550]: time="2025-11-04T23:54:06.837835944Z" level=info msg="StartContainer for \"747c437f1273a0d640e6929a853dfd1a0a2d8d15fe8a3976ebeec7fec1884175\"" Nov 4 23:54:06.841513 containerd[2550]: time="2025-11-04T23:54:06.841080079Z" level=info msg="connecting to shim 747c437f1273a0d640e6929a853dfd1a0a2d8d15fe8a3976ebeec7fec1884175" address="unix:///run/containerd/s/4b13de2799da02e1121adbdbe7c7116d570c76e1ceae895b9c9b4fd7c818f54a" protocol=ttrpc version=3 Nov 4 23:54:06.848179 systemd[1]: Started cri-containerd-e0d454a6a9366894fc1be0d0eac8a87f36a04bb42e25e3bf081b64f44c4f054b.scope - libcontainer container e0d454a6a9366894fc1be0d0eac8a87f36a04bb42e25e3bf081b64f44c4f054b. Nov 4 23:54:06.859695 systemd[1]: Started cri-containerd-2c5348ae40cebfb9fe638319d13daac075890cb0501ca7ba2c12551fde7b5fbb.scope - libcontainer container 2c5348ae40cebfb9fe638319d13daac075890cb0501ca7ba2c12551fde7b5fbb. Nov 4 23:54:06.877631 systemd[1]: Started cri-containerd-747c437f1273a0d640e6929a853dfd1a0a2d8d15fe8a3976ebeec7fec1884175.scope - libcontainer container 747c437f1273a0d640e6929a853dfd1a0a2d8d15fe8a3976ebeec7fec1884175. 
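With the sandboxes up, the CreateContainer/StartContainer entries above launch the actual kube-scheduler, kube-controller-manager and kube-apiserver containers, each wrapped in its own cri-containerd-<id>.scope. Their state and output are reachable through crictl (a sketch):

    crictl ps -a | grep -E 'kube-(apiserver|controller-manager|scheduler)'
    crictl logs "$(crictl ps --name kube-apiserver -q | head -n1)"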
Nov 4 23:54:06.955254 containerd[2550]: time="2025-11-04T23:54:06.955224779Z" level=info msg="StartContainer for \"2c5348ae40cebfb9fe638319d13daac075890cb0501ca7ba2c12551fde7b5fbb\" returns successfully" Nov 4 23:54:06.955404 containerd[2550]: time="2025-11-04T23:54:06.955385416Z" level=info msg="StartContainer for \"e0d454a6a9366894fc1be0d0eac8a87f36a04bb42e25e3bf081b64f44c4f054b\" returns successfully" Nov 4 23:54:06.971762 containerd[2550]: time="2025-11-04T23:54:06.971735270Z" level=info msg="StartContainer for \"747c437f1273a0d640e6929a853dfd1a0a2d8d15fe8a3976ebeec7fec1884175\" returns successfully" Nov 4 23:54:07.023954 kubelet[3560]: E1104 23:54:07.023917 3560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-a41d436e4a?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="1.6s" Nov 4 23:54:07.190716 kubelet[3560]: I1104 23:54:07.190654 3560 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:07.740510 kubelet[3560]: E1104 23:54:07.740464 3560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-a41d436e4a\" not found" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:07.745396 kubelet[3560]: E1104 23:54:07.745376 3560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-a41d436e4a\" not found" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:07.746912 kubelet[3560]: E1104 23:54:07.746893 3560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-a41d436e4a\" not found" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:08.751253 kubelet[3560]: E1104 23:54:08.751085 3560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-a41d436e4a\" not found" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:08.751864 kubelet[3560]: E1104 23:54:08.751852 3560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-a41d436e4a\" not found" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:08.834650 kubelet[3560]: E1104 23:54:08.834613 3560 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4487.0.0-n-a41d436e4a\" not found" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:08.999233 kubelet[3560]: I1104 23:54:08.999193 3560 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:09.018912 kubelet[3560]: I1104 23:54:09.018816 3560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:09.029175 kubelet[3560]: E1104 23:54:09.029148 3560 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-n-a41d436e4a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:09.029175 kubelet[3560]: I1104 23:54:09.029170 3560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:09.030338 kubelet[3560]: E1104 23:54:09.030314 3560 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.0-n-a41d436e4a\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:09.030338 kubelet[3560]: I1104 23:54:09.030335 3560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:09.031410 kubelet[3560]: E1104 23:54:09.031388 3560 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.0-n-a41d436e4a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:09.387659 update_engine[2524]: I20251104 23:54:09.387279 2524 update_attempter.cc:509] Updating boot flags... Nov 4 23:54:09.596599 kubelet[3560]: I1104 23:54:09.596563 3560 apiserver.go:52] "Watching apiserver" Nov 4 23:54:09.616494 kubelet[3560]: I1104 23:54:09.616035 3560 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 23:54:09.749033 kubelet[3560]: I1104 23:54:09.748997 3560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:09.751884 kubelet[3560]: E1104 23:54:09.751849 3560 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.0-n-a41d436e4a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:11.312754 systemd[1]: Reload requested from client PID 3876 ('systemctl') (unit session-9.scope)... Nov 4 23:54:11.312771 systemd[1]: Reloading... Nov 4 23:54:11.408518 zram_generator::config[3930]: No configuration found. Nov 4 23:54:11.643905 systemd[1]: Reloading finished in 330 ms. Nov 4 23:54:11.680991 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:11.698403 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 23:54:11.698674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:11.698731 systemd[1]: kubelet.service: Consumed 586ms CPU time, 130.1M memory peak. Nov 4 23:54:11.700371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:13.081851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:13.089713 (kubelet)[3991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:54:13.135850 kubelet[3991]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:54:13.135850 kubelet[3991]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:54:13.135850 kubelet[3991]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
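The "no PriorityClass with name system-node-critical" rejections are transient: mirror pods for the static control-plane pods cannot be created until the freshly started API server has bootstrapped its built-in priority classes. Once it has, these errors stop on the next sync. A check, assuming kubeadm's usual admin kubeconfig path:

    kubectl --kubeconfig /etc/kubernetes/admin.conf get priorityclasses
    # expect system-cluster-critical and system-node-critical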
Nov 4 23:54:13.135850 kubelet[3991]: I1104 23:54:13.135302 3991 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:54:13.141313 kubelet[3991]: I1104 23:54:13.141290 3991 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 23:54:13.141313 kubelet[3991]: I1104 23:54:13.141307 3991 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:54:13.141503 kubelet[3991]: I1104 23:54:13.141494 3991 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:54:13.142298 kubelet[3991]: I1104 23:54:13.142282 3991 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 23:54:13.144524 kubelet[3991]: I1104 23:54:13.144464 3991 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:54:13.148421 kubelet[3991]: I1104 23:54:13.148399 3991 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:54:13.150068 kubelet[3991]: I1104 23:54:13.150052 3991 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 4 23:54:13.150257 kubelet[3991]: I1104 23:54:13.150237 3991 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:54:13.150382 kubelet[3991]: I1104 23:54:13.150256 3991 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-n-a41d436e4a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:54:13.150510 kubelet[3991]: I1104 23:54:13.150386 3991 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 23:54:13.150510 kubelet[3991]: I1104 23:54:13.150395 3991 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 23:54:13.150510 kubelet[3991]: I1104 23:54:13.150435 3991 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:54:13.150588 kubelet[3991]: I1104 
23:54:13.150574 3991 kubelet.go:480] "Attempting to sync node with API server" Nov 4 23:54:13.150588 kubelet[3991]: I1104 23:54:13.150583 3991 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:54:13.150630 kubelet[3991]: I1104 23:54:13.150602 3991 kubelet.go:386] "Adding apiserver pod source" Nov 4 23:54:13.150630 kubelet[3991]: I1104 23:54:13.150615 3991 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:54:13.156501 kubelet[3991]: I1104 23:54:13.156448 3991 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:54:13.158088 kubelet[3991]: I1104 23:54:13.158070 3991 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:54:13.164182 kubelet[3991]: I1104 23:54:13.164171 3991 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 23:54:13.164284 kubelet[3991]: I1104 23:54:13.164279 3991 server.go:1289] "Started kubelet" Nov 4 23:54:13.167932 kubelet[3991]: I1104 23:54:13.167918 3991 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:54:13.181462 kubelet[3991]: I1104 23:54:13.181438 3991 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:54:13.187102 kubelet[3991]: I1104 23:54:13.186793 3991 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 23:54:13.190040 kubelet[3991]: I1104 23:54:13.188773 3991 server.go:317] "Adding debug handlers to kubelet server" Nov 4 23:54:13.190040 kubelet[3991]: I1104 23:54:13.189606 3991 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 23:54:13.190815 kubelet[3991]: I1104 23:54:13.190769 3991 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 23:54:13.190944 kubelet[3991]: I1104 23:54:13.181573 3991 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:54:13.193142 kubelet[3991]: I1104 23:54:13.192122 3991 reconciler.go:26] "Reconciler: start to sync state" Nov 4 23:54:13.194072 kubelet[3991]: I1104 23:54:13.194056 3991 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:54:13.197626 kubelet[3991]: I1104 23:54:13.197606 3991 factory.go:223] Registration of the containerd container factory successfully Nov 4 23:54:13.198902 kubelet[3991]: I1104 23:54:13.198541 3991 factory.go:223] Registration of the systemd container factory successfully Nov 4 23:54:13.198902 kubelet[3991]: E1104 23:54:13.197858 3991 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 23:54:13.198902 kubelet[3991]: I1104 23:54:13.198769 3991 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 23:54:13.204239 kubelet[3991]: I1104 23:54:13.204218 3991 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 23:54:13.205920 kubelet[3991]: I1104 23:54:13.205904 3991 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 4 23:54:13.205995 kubelet[3991]: I1104 23:54:13.205989 3991 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 23:54:13.206059 kubelet[3991]: I1104 23:54:13.206053 3991 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 23:54:13.206093 kubelet[3991]: I1104 23:54:13.206089 3991 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 23:54:13.206153 kubelet[3991]: E1104 23:54:13.206141 3991 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 23:54:13.229255 sudo[4027]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 4 23:54:13.230069 sudo[4027]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 4 23:54:13.246642 kubelet[3991]: I1104 23:54:13.246628 3991 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 23:54:13.246748 kubelet[3991]: I1104 23:54:13.246718 3991 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 23:54:13.246748 kubelet[3991]: I1104 23:54:13.246741 3991 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:54:13.247513 kubelet[3991]: I1104 23:54:13.246895 3991 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 23:54:13.247598 kubelet[3991]: I1104 23:54:13.247576 3991 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 23:54:13.248283 kubelet[3991]: I1104 23:54:13.247633 3991 policy_none.go:49] "None policy: Start" Nov 4 23:54:13.248283 kubelet[3991]: I1104 23:54:13.247644 3991 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 23:54:13.248283 kubelet[3991]: I1104 23:54:13.247655 3991 state_mem.go:35] "Initializing new in-memory state store" Nov 4 23:54:13.248283 kubelet[3991]: I1104 23:54:13.247751 3991 state_mem.go:75] "Updated machine memory state" Nov 4 23:54:13.251462 kubelet[3991]: E1104 23:54:13.251447 3991 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 23:54:13.252079 kubelet[3991]: I1104 23:54:13.252068 3991 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:54:13.252757 kubelet[3991]: I1104 23:54:13.252729 3991 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:54:13.253211 kubelet[3991]: I1104 23:54:13.253202 3991 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 23:54:13.256568 kubelet[3991]: E1104 23:54:13.256554 3991 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 4 23:54:13.307576 kubelet[3991]: I1104 23:54:13.307449 3991 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:13.308057 kubelet[3991]: I1104 23:54:13.307825 3991 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:13.308360 kubelet[3991]: I1104 23:54:13.307954 3991 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:13.319168 kubelet[3991]: I1104 23:54:13.319152 3991 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 4 23:54:13.324182 kubelet[3991]: I1104 23:54:13.324092 3991 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 4 23:54:13.324789 kubelet[3991]: I1104 23:54:13.324703 3991 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 4 23:54:13.365362 kubelet[3991]: I1104 23:54:13.365116 3991 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:13.379007 kubelet[3991]: I1104 23:54:13.378976 3991 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:13.379076 kubelet[3991]: I1104 23:54:13.379029 3991 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:13.394418 kubelet[3991]: I1104 23:54:13.394394 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d0e08398271ea85d5b7bfb8b4f92aaa-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-n-a41d436e4a\" (UID: \"6d0e08398271ea85d5b7bfb8b4f92aaa\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:13.394501 kubelet[3991]: I1104 23:54:13.394429 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfd909f0a4a9669c0b158d5bc2fc6133-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-a41d436e4a\" (UID: \"cfd909f0a4a9669c0b158d5bc2fc6133\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:13.394501 kubelet[3991]: I1104 23:54:13.394448 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cfd909f0a4a9669c0b158d5bc2fc6133-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-n-a41d436e4a\" (UID: \"cfd909f0a4a9669c0b158d5bc2fc6133\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:13.394501 kubelet[3991]: I1104 23:54:13.394467 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfd909f0a4a9669c0b158d5bc2fc6133-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-a41d436e4a\" (UID: \"cfd909f0a4a9669c0b158d5bc2fc6133\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:13.394585 kubelet[3991]: I1104 23:54:13.394500 3991 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d0e08398271ea85d5b7bfb8b4f92aaa-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-n-a41d436e4a\" (UID: \"6d0e08398271ea85d5b7bfb8b4f92aaa\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:13.394585 kubelet[3991]: I1104 23:54:13.394521 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d0e08398271ea85d5b7bfb8b4f92aaa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-n-a41d436e4a\" (UID: \"6d0e08398271ea85d5b7bfb8b4f92aaa\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:13.394585 kubelet[3991]: I1104 23:54:13.394541 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cfd909f0a4a9669c0b158d5bc2fc6133-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-n-a41d436e4a\" (UID: \"cfd909f0a4a9669c0b158d5bc2fc6133\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:13.394585 kubelet[3991]: I1104 23:54:13.394559 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfd909f0a4a9669c0b158d5bc2fc6133-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-n-a41d436e4a\" (UID: \"cfd909f0a4a9669c0b158d5bc2fc6133\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:13.394585 kubelet[3991]: I1104 23:54:13.394577 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4522e450d717f67040520ce1ac0744e-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-n-a41d436e4a\" (UID: \"c4522e450d717f67040520ce1ac0744e\") " pod="kube-system/kube-scheduler-ci-4487.0.0-n-a41d436e4a" Nov 4 23:54:13.595896 sudo[4027]: pam_unix(sudo:session): session closed for user root Nov 4 23:54:14.151017 kubelet[3991]: I1104 23:54:14.150977 3991 apiserver.go:52] "Watching apiserver" Nov 4 23:54:14.191351 kubelet[3991]: I1104 23:54:14.191314 3991 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 23:54:14.241316 kubelet[3991]: I1104 23:54:14.241233 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487.0.0-n-a41d436e4a" podStartSLOduration=1.24121914 podStartE2EDuration="1.24121914s" podCreationTimestamp="2025-11-04 23:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:54:14.241040697 +0000 UTC m=+1.148130768" watchObservedRunningTime="2025-11-04 23:54:14.24121914 +0000 UTC m=+1.148309207" Nov 4 23:54:14.274192 kubelet[3991]: I1104 23:54:14.274141 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487.0.0-n-a41d436e4a" podStartSLOduration=1.274126619 podStartE2EDuration="1.274126619s" podCreationTimestamp="2025-11-04 23:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:54:14.250524408 +0000 UTC m=+1.157614478" watchObservedRunningTime="2025-11-04 23:54:14.274126619 +0000 UTC m=+1.181216685" 
Nov 4 23:54:14.338496 kubelet[3991]: I1104 23:54:14.338416 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-a41d436e4a" podStartSLOduration=1.338404804 podStartE2EDuration="1.338404804s" podCreationTimestamp="2025-11-04 23:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:54:14.274738887 +0000 UTC m=+1.181828954" watchObservedRunningTime="2025-11-04 23:54:14.338404804 +0000 UTC m=+1.245494870" Nov 4 23:54:14.884669 sudo[2977]: pam_unix(sudo:session): session closed for user root Nov 4 23:54:14.985392 sshd[2976]: Connection closed by 10.200.16.10 port 34080 Nov 4 23:54:14.986642 sshd-session[2973]: pam_unix(sshd:session): session closed for user core Nov 4 23:54:14.989993 systemd[1]: sshd@6-10.200.8.39:22-10.200.16.10:34080.service: Deactivated successfully. Nov 4 23:54:14.992197 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 23:54:14.992379 systemd[1]: session-9.scope: Consumed 3.410s CPU time, 272.8M memory peak. Nov 4 23:54:14.993985 systemd-logind[2523]: Session 9 logged out. Waiting for processes to exit. Nov 4 23:54:14.995107 systemd-logind[2523]: Removed session 9. Nov 4 23:54:16.258885 kubelet[3991]: I1104 23:54:16.258855 3991 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 23:54:16.259330 containerd[2550]: time="2025-11-04T23:54:16.259285068Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 4 23:54:16.259611 kubelet[3991]: I1104 23:54:16.259459 3991 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 23:54:16.737801 systemd[1]: Created slice kubepods-besteffort-pod7466ec1f_be39_499a_8c8b_e9d3c548d842.slice - libcontainer container kubepods-besteffort-pod7466ec1f_be39_499a_8c8b_e9d3c548d842.slice. 
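The "Updating Pod CIDR" entry shows the kubelet handing the node range `192.168.0.0/24` to the runtime, which bounds how many pod IPs this node can allocate. A standard-library sketch of what that range contains:

```python
# Sketch: inspect the pod CIDR pushed to the runtime; the value is taken
# directly from the "Updating Pod CIDR" log line.
import ipaddress

pod_cidr = ipaddress.ip_network("192.168.0.0/24")
print(pod_cidr.num_addresses)          # 256 addresses in the /24
print(pod_cidr[1], "-", pod_cidr[-2])  # host range 192.168.0.1 - 192.168.0.254
```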
Nov 4 23:54:16.818743 kubelet[3991]: I1104 23:54:16.818720 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7466ec1f-be39-499a-8c8b-e9d3c548d842-xtables-lock\") pod \"kube-proxy-krrpz\" (UID: \"7466ec1f-be39-499a-8c8b-e9d3c548d842\") " pod="kube-system/kube-proxy-krrpz" Nov 4 23:54:16.818955 kubelet[3991]: I1104 23:54:16.818942 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7466ec1f-be39-499a-8c8b-e9d3c548d842-lib-modules\") pod \"kube-proxy-krrpz\" (UID: \"7466ec1f-be39-499a-8c8b-e9d3c548d842\") " pod="kube-system/kube-proxy-krrpz" Nov 4 23:54:16.819084 kubelet[3991]: I1104 23:54:16.819011 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g67d\" (UniqueName: \"kubernetes.io/projected/7466ec1f-be39-499a-8c8b-e9d3c548d842-kube-api-access-9g67d\") pod \"kube-proxy-krrpz\" (UID: \"7466ec1f-be39-499a-8c8b-e9d3c548d842\") " pod="kube-system/kube-proxy-krrpz" Nov 4 23:54:16.819084 kubelet[3991]: I1104 23:54:16.819036 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7466ec1f-be39-499a-8c8b-e9d3c548d842-kube-proxy\") pod \"kube-proxy-krrpz\" (UID: \"7466ec1f-be39-499a-8c8b-e9d3c548d842\") " pod="kube-system/kube-proxy-krrpz" Nov 4 23:54:16.822733 systemd[1]: Created slice kubepods-burstable-pode0eaacd8_2398_4038_9439_9c6d8004e526.slice - libcontainer container kubepods-burstable-pode0eaacd8_2398_4038_9439_9c6d8004e526.slice. Nov 4 23:54:16.921505 kubelet[3991]: I1104 23:54:16.920046 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-lib-modules\") pod \"cilium-bc9pk\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " pod="kube-system/cilium-bc9pk" Nov 4 23:54:16.921505 kubelet[3991]: I1104 23:54:16.920105 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e0eaacd8-2398-4038-9439-9c6d8004e526-clustermesh-secrets\") pod \"cilium-bc9pk\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " pod="kube-system/cilium-bc9pk" Nov 4 23:54:16.921505 kubelet[3991]: I1104 23:54:16.920130 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0eaacd8-2398-4038-9439-9c6d8004e526-cilium-config-path\") pod \"cilium-bc9pk\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " pod="kube-system/cilium-bc9pk" Nov 4 23:54:16.921505 kubelet[3991]: I1104 23:54:16.920147 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e0eaacd8-2398-4038-9439-9c6d8004e526-hubble-tls\") pod \"cilium-bc9pk\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " pod="kube-system/cilium-bc9pk" Nov 4 23:54:16.921505 kubelet[3991]: I1104 23:54:16.920174 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-bpf-maps\") pod \"cilium-bc9pk\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") 
" pod="kube-system/cilium-bc9pk" Nov 4 23:54:16.921505 kubelet[3991]: I1104 23:54:16.920191 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-xtables-lock\") pod \"cilium-bc9pk\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " pod="kube-system/cilium-bc9pk" Nov 4 23:54:16.921801 kubelet[3991]: I1104 23:54:16.920209 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-cilium-run\") pod \"cilium-bc9pk\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " pod="kube-system/cilium-bc9pk" Nov 4 23:54:16.921801 kubelet[3991]: I1104 23:54:16.920226 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-cilium-cgroup\") pod \"cilium-bc9pk\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " pod="kube-system/cilium-bc9pk" Nov 4 23:54:16.921801 kubelet[3991]: I1104 23:54:16.920244 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-cni-path\") pod \"cilium-bc9pk\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " pod="kube-system/cilium-bc9pk" Nov 4 23:54:16.921801 kubelet[3991]: I1104 23:54:16.920263 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-host-proc-sys-net\") pod \"cilium-bc9pk\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " pod="kube-system/cilium-bc9pk" Nov 4 23:54:16.921801 kubelet[3991]: I1104 23:54:16.920281 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-host-proc-sys-kernel\") pod \"cilium-bc9pk\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " pod="kube-system/cilium-bc9pk" Nov 4 23:54:16.921801 kubelet[3991]: I1104 23:54:16.920299 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvbq2\" (UniqueName: \"kubernetes.io/projected/e0eaacd8-2398-4038-9439-9c6d8004e526-kube-api-access-wvbq2\") pod \"cilium-bc9pk\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " pod="kube-system/cilium-bc9pk" Nov 4 23:54:16.921958 kubelet[3991]: I1104 23:54:16.920338 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-hostproc\") pod \"cilium-bc9pk\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " pod="kube-system/cilium-bc9pk" Nov 4 23:54:16.921958 kubelet[3991]: I1104 23:54:16.920357 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-etc-cni-netd\") pod \"cilium-bc9pk\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " pod="kube-system/cilium-bc9pk" Nov 4 23:54:16.924071 kubelet[3991]: E1104 23:54:16.924042 3991 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" 
not found Nov 4 23:54:16.924071 kubelet[3991]: E1104 23:54:16.924067 3991 projected.go:194] Error preparing data for projected volume kube-api-access-9g67d for pod kube-system/kube-proxy-krrpz: configmap "kube-root-ca.crt" not found Nov 4 23:54:16.924178 kubelet[3991]: E1104 23:54:16.924137 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7466ec1f-be39-499a-8c8b-e9d3c548d842-kube-api-access-9g67d podName:7466ec1f-be39-499a-8c8b-e9d3c548d842 nodeName:}" failed. No retries permitted until 2025-11-04 23:54:17.424115575 +0000 UTC m=+4.331205644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9g67d" (UniqueName: "kubernetes.io/projected/7466ec1f-be39-499a-8c8b-e9d3c548d842-kube-api-access-9g67d") pod "kube-proxy-krrpz" (UID: "7466ec1f-be39-499a-8c8b-e9d3c548d842") : configmap "kube-root-ca.crt" not found Nov 4 23:54:17.039690 kubelet[3991]: E1104 23:54:17.039594 3991 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 4 23:54:17.039690 kubelet[3991]: E1104 23:54:17.039621 3991 projected.go:194] Error preparing data for projected volume kube-api-access-wvbq2 for pod kube-system/cilium-bc9pk: configmap "kube-root-ca.crt" not found Nov 4 23:54:17.040007 kubelet[3991]: E1104 23:54:17.039760 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0eaacd8-2398-4038-9439-9c6d8004e526-kube-api-access-wvbq2 podName:e0eaacd8-2398-4038-9439-9c6d8004e526 nodeName:}" failed. No retries permitted until 2025-11-04 23:54:17.539742404 +0000 UTC m=+4.446832475 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wvbq2" (UniqueName: "kubernetes.io/projected/e0eaacd8-2398-4038-9439-9c6d8004e526-kube-api-access-wvbq2") pod "cilium-bc9pk" (UID: "e0eaacd8-2398-4038-9439-9c6d8004e526") : configmap "kube-root-ca.crt" not found Nov 4 23:54:17.450523 systemd[1]: Created slice kubepods-besteffort-podfaebeebb_fb31_4cba_91fc_b1b742dc59cb.slice - libcontainer container kubepods-besteffort-podfaebeebb_fb31_4cba_91fc_b1b742dc59cb.slice. 
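The `kube-api-access-*` projected volumes fail at first because the `kube-root-ca.crt` ConfigMap is published into each namespace by the controller manager and simply does not exist yet this early in bootstrap; the kubelet schedules a retry 500ms later (`durationBeforeRetry 500ms`) and backs off on repeated failures. A hedged sketch of waiting for that ConfigMap with a doubling delay, assuming the official `kubernetes` Python client; the 0.5 s starting point comes from the log, while the cap below is an illustrative choice rather than the kubelet's exact policy:

```python
# Sketch: poll for the kube-root-ca.crt ConfigMap with a doubling backoff.
# 0.5 s mirrors "durationBeforeRetry 500ms"; the 60 s cap is illustrative.
import time
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
core = client.CoreV1Api()

delay = 0.5
while True:
    try:
        core.read_namespaced_config_map("kube-root-ca.crt", "kube-system")
        print("kube-root-ca.crt is available")
        break
    except ApiException as exc:
        if exc.status != 404:
            raise
        time.sleep(delay)
        delay = min(delay * 2, 60.0)
```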
Nov 4 23:54:17.524877 kubelet[3991]: I1104 23:54:17.524823 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/faebeebb-fb31-4cba-91fc-b1b742dc59cb-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vdlgk\" (UID: \"faebeebb-fb31-4cba-91fc-b1b742dc59cb\") " pod="kube-system/cilium-operator-6c4d7847fc-vdlgk" Nov 4 23:54:17.525206 kubelet[3991]: I1104 23:54:17.524889 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96qnk\" (UniqueName: \"kubernetes.io/projected/faebeebb-fb31-4cba-91fc-b1b742dc59cb-kube-api-access-96qnk\") pod \"cilium-operator-6c4d7847fc-vdlgk\" (UID: \"faebeebb-fb31-4cba-91fc-b1b742dc59cb\") " pod="kube-system/cilium-operator-6c4d7847fc-vdlgk" Nov 4 23:54:17.646707 containerd[2550]: time="2025-11-04T23:54:17.646668511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-krrpz,Uid:7466ec1f-be39-499a-8c8b-e9d3c548d842,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:17.698273 containerd[2550]: time="2025-11-04T23:54:17.698215621Z" level=info msg="connecting to shim a24123c4d1cd335236389c940f40e12c331e7faf503e9287e452ccba0570929a" address="unix:///run/containerd/s/5b65734584e653640b42cb1e26ce209394581220b7f1d31801de8d839336e28a" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:17.716672 systemd[1]: Started cri-containerd-a24123c4d1cd335236389c940f40e12c331e7faf503e9287e452ccba0570929a.scope - libcontainer container a24123c4d1cd335236389c940f40e12c331e7faf503e9287e452ccba0570929a. Nov 4 23:54:17.726470 containerd[2550]: time="2025-11-04T23:54:17.726444041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bc9pk,Uid:e0eaacd8-2398-4038-9439-9c6d8004e526,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:17.744281 containerd[2550]: time="2025-11-04T23:54:17.744242547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-krrpz,Uid:7466ec1f-be39-499a-8c8b-e9d3c548d842,Namespace:kube-system,Attempt:0,} returns sandbox id \"a24123c4d1cd335236389c940f40e12c331e7faf503e9287e452ccba0570929a\"" Nov 4 23:54:17.752804 containerd[2550]: time="2025-11-04T23:54:17.752453978Z" level=info msg="CreateContainer within sandbox \"a24123c4d1cd335236389c940f40e12c331e7faf503e9287e452ccba0570929a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 23:54:17.755098 containerd[2550]: time="2025-11-04T23:54:17.755070009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vdlgk,Uid:faebeebb-fb31-4cba-91fc-b1b742dc59cb,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:17.780365 containerd[2550]: time="2025-11-04T23:54:17.780340722Z" level=info msg="Container e9c3012203859a4284686a5fbd4603f670f69245856db175d1b86f2336f02251: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:17.811381 containerd[2550]: time="2025-11-04T23:54:17.811350114Z" level=info msg="connecting to shim 1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f" address="unix:///run/containerd/s/3b4acdade414f9d97609c73e8b64a5bd15c0d14fc977c70a761eb7bed6368fdd" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:17.816020 containerd[2550]: time="2025-11-04T23:54:17.815990743Z" level=info msg="CreateContainer within sandbox \"a24123c4d1cd335236389c940f40e12c331e7faf503e9287e452ccba0570929a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e9c3012203859a4284686a5fbd4603f670f69245856db175d1b86f2336f02251\"" Nov 4 
23:54:17.818261 containerd[2550]: time="2025-11-04T23:54:17.817610981Z" level=info msg="StartContainer for \"e9c3012203859a4284686a5fbd4603f670f69245856db175d1b86f2336f02251\"" Nov 4 23:54:17.821094 containerd[2550]: time="2025-11-04T23:54:17.820565449Z" level=info msg="connecting to shim e9c3012203859a4284686a5fbd4603f670f69245856db175d1b86f2336f02251" address="unix:///run/containerd/s/5b65734584e653640b42cb1e26ce209394581220b7f1d31801de8d839336e28a" protocol=ttrpc version=3 Nov 4 23:54:17.830355 containerd[2550]: time="2025-11-04T23:54:17.830125309Z" level=info msg="connecting to shim efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa" address="unix:///run/containerd/s/ee3dac41f8f8601ccde990e1ad1b666583163cade81b5e9cc7e1608cfe30b877" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:17.841670 systemd[1]: Started cri-containerd-1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f.scope - libcontainer container 1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f. Nov 4 23:54:17.853642 systemd[1]: Started cri-containerd-e9c3012203859a4284686a5fbd4603f670f69245856db175d1b86f2336f02251.scope - libcontainer container e9c3012203859a4284686a5fbd4603f670f69245856db175d1b86f2336f02251. Nov 4 23:54:17.858413 systemd[1]: Started cri-containerd-efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa.scope - libcontainer container efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa. Nov 4 23:54:17.915342 containerd[2550]: time="2025-11-04T23:54:17.915284687Z" level=info msg="StartContainer for \"e9c3012203859a4284686a5fbd4603f670f69245856db175d1b86f2336f02251\" returns successfully" Nov 4 23:54:17.916343 containerd[2550]: time="2025-11-04T23:54:17.916315911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bc9pk,Uid:e0eaacd8-2398-4038-9439-9c6d8004e526,Namespace:kube-system,Attempt:0,} returns sandbox id \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\"" Nov 4 23:54:17.919442 containerd[2550]: time="2025-11-04T23:54:17.919418707Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 4 23:54:17.998430 containerd[2550]: time="2025-11-04T23:54:17.998354603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vdlgk,Uid:faebeebb-fb31-4cba-91fc-b1b742dc59cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\"" Nov 4 23:54:18.245279 kubelet[3991]: I1104 23:54:18.245154 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-krrpz" podStartSLOduration=2.245136918 podStartE2EDuration="2.245136918s" podCreationTimestamp="2025-11-04 23:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:54:18.24501162 +0000 UTC m=+5.152101687" watchObservedRunningTime="2025-11-04 23:54:18.245136918 +0000 UTC m=+5.152226988" Nov 4 23:54:24.149717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3939622484.mount: Deactivated successfully. 
Nov 4 23:54:25.750664 containerd[2550]: time="2025-11-04T23:54:25.750622841Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:25.753182 containerd[2550]: time="2025-11-04T23:54:25.753074586Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 4 23:54:25.756025 containerd[2550]: time="2025-11-04T23:54:25.755999549Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:25.757156 containerd[2550]: time="2025-11-04T23:54:25.757130437Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.837678855s" Nov 4 23:54:25.757238 containerd[2550]: time="2025-11-04T23:54:25.757225925Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 4 23:54:25.758815 containerd[2550]: time="2025-11-04T23:54:25.758777045Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 4 23:54:25.765641 containerd[2550]: time="2025-11-04T23:54:25.765615416Z" level=info msg="CreateContainer within sandbox \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 4 23:54:25.794281 containerd[2550]: time="2025-11-04T23:54:25.794256897Z" level=info msg="Container 2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:25.807605 containerd[2550]: time="2025-11-04T23:54:25.807578243Z" level=info msg="CreateContainer within sandbox \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10\"" Nov 4 23:54:25.807992 containerd[2550]: time="2025-11-04T23:54:25.807956707Z" level=info msg="StartContainer for \"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10\"" Nov 4 23:54:25.808882 containerd[2550]: time="2025-11-04T23:54:25.808851112Z" level=info msg="connecting to shim 2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10" address="unix:///run/containerd/s/3b4acdade414f9d97609c73e8b64a5bd15c0d14fc977c70a761eb7bed6368fdd" protocol=ttrpc version=3 Nov 4 23:54:25.832636 systemd[1]: Started cri-containerd-2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10.scope - libcontainer container 2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10. 
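The cilium image pull above reports both the bytes read ("166730503") and the wall-clock time ("7.837678855s"), which is enough to estimate the average pull throughput. A small arithmetic check using only those logged numbers:

```python
# Sketch: average throughput of the cilium image pull, using the exact values
# printed in the log ("bytes read=166730503", "in 7.837678855s").
bytes_read = 166_730_503
seconds = 7.837678855

mib_per_s = bytes_read / seconds / (1024 * 1024)
print(f"{mib_per_s:.1f} MiB/s")  # roughly 20 MiB/s
```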
Nov 4 23:54:25.861500 containerd[2550]: time="2025-11-04T23:54:25.861417005Z" level=info msg="StartContainer for \"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10\" returns successfully" Nov 4 23:54:25.867930 systemd[1]: cri-containerd-2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10.scope: Deactivated successfully. Nov 4 23:54:25.871823 containerd[2550]: time="2025-11-04T23:54:25.871413176Z" level=info msg="received exit event container_id:\"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10\" id:\"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10\" pid:4415 exited_at:{seconds:1762300465 nanos:865814636}" Nov 4 23:54:25.872310 containerd[2550]: time="2025-11-04T23:54:25.872292648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10\" id:\"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10\" pid:4415 exited_at:{seconds:1762300465 nanos:865814636}" Nov 4 23:54:26.793114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10-rootfs.mount: Deactivated successfully. Nov 4 23:54:30.081707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3285593547.mount: Deactivated successfully. Nov 4 23:54:30.271505 containerd[2550]: time="2025-11-04T23:54:30.269960653Z" level=info msg="CreateContainer within sandbox \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 4 23:54:30.351549 containerd[2550]: time="2025-11-04T23:54:30.350755975Z" level=info msg="Container 9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:30.368785 containerd[2550]: time="2025-11-04T23:54:30.368752370Z" level=info msg="CreateContainer within sandbox \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd\"" Nov 4 23:54:30.370415 containerd[2550]: time="2025-11-04T23:54:30.370383581Z" level=info msg="StartContainer for \"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd\"" Nov 4 23:54:30.371200 containerd[2550]: time="2025-11-04T23:54:30.371168687Z" level=info msg="connecting to shim 9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd" address="unix:///run/containerd/s/3b4acdade414f9d97609c73e8b64a5bd15c0d14fc977c70a761eb7bed6368fdd" protocol=ttrpc version=3 Nov 4 23:54:30.399643 systemd[1]: Started cri-containerd-9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd.scope - libcontainer container 9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd. Nov 4 23:54:30.445747 containerd[2550]: time="2025-11-04T23:54:30.444303984Z" level=info msg="StartContainer for \"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd\" returns successfully" Nov 4 23:54:30.454683 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 23:54:30.454938 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:54:30.455002 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:54:30.457782 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
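containerd reports container exits with an `exited_at` field in epoch seconds plus nanoseconds; converting it back to UTC lines up with the surrounding journal timestamps. A standard-library sketch using the `mount-cgroup` exit event above:

```python
# Sketch: convert containerd's exited_at (seconds:1762300465 nanos:865814636)
# back to a UTC timestamp; it lands at 23:54:25 on Nov 4, matching the journal
# lines around the exit event. Pure standard library.
from datetime import datetime, timezone

seconds, nanos = 1762300465, 865814636
exited_at = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
print(exited_at.isoformat())  # 2025-11-04T23:54:25.865815+00:00 (approx.)
```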
Nov 4 23:54:30.458079 systemd[1]: cri-containerd-9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd.scope: Deactivated successfully. Nov 4 23:54:30.462560 containerd[2550]: time="2025-11-04T23:54:30.462118683Z" level=info msg="received exit event container_id:\"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd\" id:\"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd\" pid:4470 exited_at:{seconds:1762300470 nanos:461799993}" Nov 4 23:54:30.463103 containerd[2550]: time="2025-11-04T23:54:30.463079911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd\" id:\"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd\" pid:4470 exited_at:{seconds:1762300470 nanos:461799993}" Nov 4 23:54:30.484362 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:54:30.816012 containerd[2550]: time="2025-11-04T23:54:30.815969950Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:30.820846 containerd[2550]: time="2025-11-04T23:54:30.820807468Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 4 23:54:30.826344 containerd[2550]: time="2025-11-04T23:54:30.826303311Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:30.827495 containerd[2550]: time="2025-11-04T23:54:30.827271943Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.068463503s" Nov 4 23:54:30.827495 containerd[2550]: time="2025-11-04T23:54:30.827305215Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 4 23:54:30.839130 containerd[2550]: time="2025-11-04T23:54:30.839104963Z" level=info msg="CreateContainer within sandbox \"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 4 23:54:30.854734 containerd[2550]: time="2025-11-04T23:54:30.854710442Z" level=info msg="Container 8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:30.866638 containerd[2550]: time="2025-11-04T23:54:30.866613411Z" level=info msg="CreateContainer within sandbox \"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\"" Nov 4 23:54:30.866947 containerd[2550]: time="2025-11-04T23:54:30.866918741Z" level=info msg="StartContainer for \"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\"" Nov 4 23:54:30.867707 containerd[2550]: 
time="2025-11-04T23:54:30.867684670Z" level=info msg="connecting to shim 8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5" address="unix:///run/containerd/s/ee3dac41f8f8601ccde990e1ad1b666583163cade81b5e9cc7e1608cfe30b877" protocol=ttrpc version=3 Nov 4 23:54:30.887665 systemd[1]: Started cri-containerd-8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5.scope - libcontainer container 8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5. Nov 4 23:54:30.915699 containerd[2550]: time="2025-11-04T23:54:30.915676762Z" level=info msg="StartContainer for \"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\" returns successfully" Nov 4 23:54:31.074315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd-rootfs.mount: Deactivated successfully. Nov 4 23:54:31.271980 containerd[2550]: time="2025-11-04T23:54:31.271942517Z" level=info msg="CreateContainer within sandbox \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 4 23:54:31.300393 containerd[2550]: time="2025-11-04T23:54:31.297598284Z" level=info msg="Container bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:31.313217 kubelet[3991]: I1104 23:54:31.313168 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vdlgk" podStartSLOduration=1.4859025319999999 podStartE2EDuration="14.313150388s" podCreationTimestamp="2025-11-04 23:54:17 +0000 UTC" firstStartedPulling="2025-11-04 23:54:18.000742922 +0000 UTC m=+4.907832989" lastFinishedPulling="2025-11-04 23:54:30.827990777 +0000 UTC m=+17.735080845" observedRunningTime="2025-11-04 23:54:31.288730114 +0000 UTC m=+18.195820179" watchObservedRunningTime="2025-11-04 23:54:31.313150388 +0000 UTC m=+18.220240457" Nov 4 23:54:31.320216 containerd[2550]: time="2025-11-04T23:54:31.319359444Z" level=info msg="CreateContainer within sandbox \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff\"" Nov 4 23:54:31.322652 containerd[2550]: time="2025-11-04T23:54:31.322630498Z" level=info msg="StartContainer for \"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff\"" Nov 4 23:54:31.325921 containerd[2550]: time="2025-11-04T23:54:31.325827161Z" level=info msg="connecting to shim bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff" address="unix:///run/containerd/s/3b4acdade414f9d97609c73e8b64a5bd15c0d14fc977c70a761eb7bed6368fdd" protocol=ttrpc version=3 Nov 4 23:54:31.352618 systemd[1]: Started cri-containerd-bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff.scope - libcontainer container bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff. Nov 4 23:54:31.437905 containerd[2550]: time="2025-11-04T23:54:31.437779399Z" level=info msg="StartContainer for \"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff\" returns successfully" Nov 4 23:54:31.443736 systemd[1]: cri-containerd-bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff.scope: Deactivated successfully. 
Nov 4 23:54:31.445811 containerd[2550]: time="2025-11-04T23:54:31.445646943Z" level=info msg="received exit event container_id:\"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff\" id:\"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff\" pid:4552 exited_at:{seconds:1762300471 nanos:445301009}" Nov 4 23:54:31.445964 containerd[2550]: time="2025-11-04T23:54:31.445937614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff\" id:\"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff\" pid:4552 exited_at:{seconds:1762300471 nanos:445301009}" Nov 4 23:54:31.466964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff-rootfs.mount: Deactivated successfully. Nov 4 23:54:32.275584 containerd[2550]: time="2025-11-04T23:54:32.275540896Z" level=info msg="CreateContainer within sandbox \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 4 23:54:32.302675 containerd[2550]: time="2025-11-04T23:54:32.302039613Z" level=info msg="Container a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:32.318349 containerd[2550]: time="2025-11-04T23:54:32.318320293Z" level=info msg="CreateContainer within sandbox \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1\"" Nov 4 23:54:32.319629 containerd[2550]: time="2025-11-04T23:54:32.319604937Z" level=info msg="StartContainer for \"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1\"" Nov 4 23:54:32.320397 containerd[2550]: time="2025-11-04T23:54:32.320358839Z" level=info msg="connecting to shim a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1" address="unix:///run/containerd/s/3b4acdade414f9d97609c73e8b64a5bd15c0d14fc977c70a761eb7bed6368fdd" protocol=ttrpc version=3 Nov 4 23:54:32.354618 systemd[1]: Started cri-containerd-a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1.scope - libcontainer container a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1. Nov 4 23:54:32.383355 systemd[1]: cri-containerd-a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1.scope: Deactivated successfully. 
Nov 4 23:54:32.384267 containerd[2550]: time="2025-11-04T23:54:32.384240310Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1\" id:\"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1\" pid:4588 exited_at:{seconds:1762300472 nanos:383841262}" Nov 4 23:54:32.388685 containerd[2550]: time="2025-11-04T23:54:32.388592822Z" level=info msg="received exit event container_id:\"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1\" id:\"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1\" pid:4588 exited_at:{seconds:1762300472 nanos:383841262}" Nov 4 23:54:32.392152 containerd[2550]: time="2025-11-04T23:54:32.392120830Z" level=info msg="StartContainer for \"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1\" returns successfully" Nov 4 23:54:32.410108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1-rootfs.mount: Deactivated successfully. Nov 4 23:54:33.294945 containerd[2550]: time="2025-11-04T23:54:33.294891937Z" level=info msg="CreateContainer within sandbox \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 4 23:54:33.353956 containerd[2550]: time="2025-11-04T23:54:33.353611845Z" level=info msg="Container 852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:33.375341 containerd[2550]: time="2025-11-04T23:54:33.375313314Z" level=info msg="CreateContainer within sandbox \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\"" Nov 4 23:54:33.375947 containerd[2550]: time="2025-11-04T23:54:33.375924198Z" level=info msg="StartContainer for \"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\"" Nov 4 23:54:33.376797 containerd[2550]: time="2025-11-04T23:54:33.376762695Z" level=info msg="connecting to shim 852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a" address="unix:///run/containerd/s/3b4acdade414f9d97609c73e8b64a5bd15c0d14fc977c70a761eb7bed6368fdd" protocol=ttrpc version=3 Nov 4 23:54:33.408620 systemd[1]: Started cri-containerd-852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a.scope - libcontainer container 852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a. Nov 4 23:54:33.438067 containerd[2550]: time="2025-11-04T23:54:33.438033343Z" level=info msg="StartContainer for \"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\" returns successfully" Nov 4 23:54:33.503134 containerd[2550]: time="2025-11-04T23:54:33.503109128Z" level=info msg="TaskExit event in podsandbox handler container_id:\"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\" id:\"96d003f36bf78c92d8967f8395d307afe032f103f03049ace94f8e7936ab05e8\" pid:4654 exited_at:{seconds:1762300473 nanos:502860347}" Nov 4 23:54:33.593422 kubelet[3991]: I1104 23:54:33.593297 3991 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 4 23:54:33.637988 systemd[1]: Created slice kubepods-burstable-podd67e57ec_073a_480f_b49d_e5df3749da90.slice - libcontainer container kubepods-burstable-podd67e57ec_073a_480f_b49d_e5df3749da90.slice. 
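The `kubepods-…-pod<uid>.slice` units systemd creates here follow a naming rule that is visible in the log itself: the pod's QoS class plus its UID with dashes replaced by underscores, so UID `d67e57ec-073a-480f-b49d-e5df3749da90` becomes `kubepods-burstable-podd67e57ec_073a_480f_b49d_e5df3749da90.slice`. A sketch of that mapping for the burstable/besteffort pods seen above (guaranteed pods, which do not appear in this log, omit the QoS segment):

```python
# Sketch: reconstruct the systemd slice name the kubelet uses for a pod,
# matching the "Created slice kubepods-..." lines in the log.
def pod_slice_name(uid: str, qos: str) -> str:
    escaped = uid.replace("-", "_")
    if qos.lower() == "guaranteed":
        return f"kubepods-pod{escaped}.slice"
    return f"kubepods-{qos.lower()}-pod{escaped}.slice"

print(pod_slice_name("d67e57ec-073a-480f-b49d-e5df3749da90", "Burstable"))
# kubepods-burstable-podd67e57ec_073a_480f_b49d_e5df3749da90.slice
```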
Nov 4 23:54:33.647179 systemd[1]: Created slice kubepods-burstable-podbde906a0_2611_48a8_8e10_ac276ac19ce4.slice - libcontainer container kubepods-burstable-podbde906a0_2611_48a8_8e10_ac276ac19ce4.slice. Nov 4 23:54:33.738008 kubelet[3991]: I1104 23:54:33.737948 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bde906a0-2611-48a8-8e10-ac276ac19ce4-config-volume\") pod \"coredns-674b8bbfcf-ct47h\" (UID: \"bde906a0-2611-48a8-8e10-ac276ac19ce4\") " pod="kube-system/coredns-674b8bbfcf-ct47h" Nov 4 23:54:33.738293 kubelet[3991]: I1104 23:54:33.738221 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d67e57ec-073a-480f-b49d-e5df3749da90-config-volume\") pod \"coredns-674b8bbfcf-rmdl7\" (UID: \"d67e57ec-073a-480f-b49d-e5df3749da90\") " pod="kube-system/coredns-674b8bbfcf-rmdl7" Nov 4 23:54:33.739500 kubelet[3991]: I1104 23:54:33.738340 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wn55\" (UniqueName: \"kubernetes.io/projected/d67e57ec-073a-480f-b49d-e5df3749da90-kube-api-access-8wn55\") pod \"coredns-674b8bbfcf-rmdl7\" (UID: \"d67e57ec-073a-480f-b49d-e5df3749da90\") " pod="kube-system/coredns-674b8bbfcf-rmdl7" Nov 4 23:54:33.739500 kubelet[3991]: I1104 23:54:33.738366 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv6hj\" (UniqueName: \"kubernetes.io/projected/bde906a0-2611-48a8-8e10-ac276ac19ce4-kube-api-access-nv6hj\") pod \"coredns-674b8bbfcf-ct47h\" (UID: \"bde906a0-2611-48a8-8e10-ac276ac19ce4\") " pod="kube-system/coredns-674b8bbfcf-ct47h" Nov 4 23:54:33.943617 containerd[2550]: time="2025-11-04T23:54:33.943575023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rmdl7,Uid:d67e57ec-073a-480f-b49d-e5df3749da90,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:33.952101 containerd[2550]: time="2025-11-04T23:54:33.952076716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ct47h,Uid:bde906a0-2611-48a8-8e10-ac276ac19ce4,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:34.293266 kubelet[3991]: I1104 23:54:34.292927 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bc9pk" podStartSLOduration=10.453328639 podStartE2EDuration="18.292911628s" podCreationTimestamp="2025-11-04 23:54:16 +0000 UTC" firstStartedPulling="2025-11-04 23:54:17.918426079 +0000 UTC m=+4.825516148" lastFinishedPulling="2025-11-04 23:54:25.758009085 +0000 UTC m=+12.665099137" observedRunningTime="2025-11-04 23:54:34.292079619 +0000 UTC m=+21.199169687" watchObservedRunningTime="2025-11-04 23:54:34.292911628 +0000 UTC m=+21.200001698" Nov 4 23:54:35.811755 systemd-networkd[2190]: cilium_host: Link UP Nov 4 23:54:35.813426 systemd-networkd[2190]: cilium_net: Link UP Nov 4 23:54:35.813617 systemd-networkd[2190]: cilium_net: Gained carrier Nov 4 23:54:35.813722 systemd-networkd[2190]: cilium_host: Gained carrier Nov 4 23:54:35.989669 systemd-networkd[2190]: cilium_vxlan: Link UP Nov 4 23:54:35.989680 systemd-networkd[2190]: cilium_vxlan: Gained carrier Nov 4 23:54:36.327507 kernel: NET: Registered PF_ALG protocol family Nov 4 23:54:36.525599 systemd-networkd[2190]: cilium_net: Gained IPv6LL Nov 4 23:54:36.653689 systemd-networkd[2190]: cilium_host: Gained IPv6LL Nov 4 
23:54:36.999975 systemd-networkd[2190]: lxc_health: Link UP Nov 4 23:54:37.011387 systemd-networkd[2190]: lxc_health: Gained carrier Nov 4 23:54:37.037584 systemd-networkd[2190]: cilium_vxlan: Gained IPv6LL Nov 4 23:54:37.486403 kernel: eth0: renamed from tmp373dc Nov 4 23:54:37.485131 systemd-networkd[2190]: lxc20f9e3a30772: Link UP Nov 4 23:54:37.488813 systemd-networkd[2190]: lxc20f9e3a30772: Gained carrier Nov 4 23:54:37.504754 systemd-networkd[2190]: lxce10ce7c15703: Link UP Nov 4 23:54:37.507808 kernel: eth0: renamed from tmp78cba Nov 4 23:54:37.507667 systemd-networkd[2190]: lxce10ce7c15703: Gained carrier Nov 4 23:54:38.509637 systemd-networkd[2190]: lxc_health: Gained IPv6LL Nov 4 23:54:38.574351 systemd-networkd[2190]: lxce10ce7c15703: Gained IPv6LL Nov 4 23:54:38.637651 systemd-networkd[2190]: lxc20f9e3a30772: Gained IPv6LL Nov 4 23:54:40.447807 containerd[2550]: time="2025-11-04T23:54:40.447754872Z" level=info msg="connecting to shim 373dc5023bf5503ba659f5c46d3d142f409e884f3966af30f9556bdff100f66d" address="unix:///run/containerd/s/64934022574590aaafea51cca34148207fb72e4a33c49df5efa530610ec39faa" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:40.463812 containerd[2550]: time="2025-11-04T23:54:40.463778604Z" level=info msg="connecting to shim 78cbad43a2e9207c74aaab21ad7b56301728db9e4dcfd0eed53543df91113fd9" address="unix:///run/containerd/s/011ab8ba6460a761f585f58ef92c9a9ff6ca53d1c53b20cf8f94641036769270" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:40.493663 systemd[1]: Started cri-containerd-373dc5023bf5503ba659f5c46d3d142f409e884f3966af30f9556bdff100f66d.scope - libcontainer container 373dc5023bf5503ba659f5c46d3d142f409e884f3966af30f9556bdff100f66d. Nov 4 23:54:40.497713 systemd[1]: Started cri-containerd-78cbad43a2e9207c74aaab21ad7b56301728db9e4dcfd0eed53543df91113fd9.scope - libcontainer container 78cbad43a2e9207c74aaab21ad7b56301728db9e4dcfd0eed53543df91113fd9. 
Nov 4 23:54:40.554441 containerd[2550]: time="2025-11-04T23:54:40.554418653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rmdl7,Uid:d67e57ec-073a-480f-b49d-e5df3749da90,Namespace:kube-system,Attempt:0,} returns sandbox id \"373dc5023bf5503ba659f5c46d3d142f409e884f3966af30f9556bdff100f66d\"" Nov 4 23:54:40.558857 containerd[2550]: time="2025-11-04T23:54:40.558824280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ct47h,Uid:bde906a0-2611-48a8-8e10-ac276ac19ce4,Namespace:kube-system,Attempt:0,} returns sandbox id \"78cbad43a2e9207c74aaab21ad7b56301728db9e4dcfd0eed53543df91113fd9\"" Nov 4 23:54:40.562509 containerd[2550]: time="2025-11-04T23:54:40.562109761Z" level=info msg="CreateContainer within sandbox \"373dc5023bf5503ba659f5c46d3d142f409e884f3966af30f9556bdff100f66d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:54:40.567308 containerd[2550]: time="2025-11-04T23:54:40.567283751Z" level=info msg="CreateContainer within sandbox \"78cbad43a2e9207c74aaab21ad7b56301728db9e4dcfd0eed53543df91113fd9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:54:40.583800 containerd[2550]: time="2025-11-04T23:54:40.583773793Z" level=info msg="Container 143f4ff1bb03144fcf90c9f015d8a4c2bfee3bb4658e15a762cd214481b1e956: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:40.594220 containerd[2550]: time="2025-11-04T23:54:40.594193716Z" level=info msg="Container 1d3c9e564c4d8b5f88445723e5f3ea7f4f04ce366cf616a8615d52027ce6cd98: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:40.631610 containerd[2550]: time="2025-11-04T23:54:40.631584676Z" level=info msg="CreateContainer within sandbox \"373dc5023bf5503ba659f5c46d3d142f409e884f3966af30f9556bdff100f66d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"143f4ff1bb03144fcf90c9f015d8a4c2bfee3bb4658e15a762cd214481b1e956\"" Nov 4 23:54:40.632308 containerd[2550]: time="2025-11-04T23:54:40.632087224Z" level=info msg="StartContainer for \"143f4ff1bb03144fcf90c9f015d8a4c2bfee3bb4658e15a762cd214481b1e956\"" Nov 4 23:54:40.633131 containerd[2550]: time="2025-11-04T23:54:40.633093661Z" level=info msg="connecting to shim 143f4ff1bb03144fcf90c9f015d8a4c2bfee3bb4658e15a762cd214481b1e956" address="unix:///run/containerd/s/64934022574590aaafea51cca34148207fb72e4a33c49df5efa530610ec39faa" protocol=ttrpc version=3 Nov 4 23:54:40.639364 containerd[2550]: time="2025-11-04T23:54:40.639337675Z" level=info msg="CreateContainer within sandbox \"78cbad43a2e9207c74aaab21ad7b56301728db9e4dcfd0eed53543df91113fd9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1d3c9e564c4d8b5f88445723e5f3ea7f4f04ce366cf616a8615d52027ce6cd98\"" Nov 4 23:54:40.639700 containerd[2550]: time="2025-11-04T23:54:40.639667409Z" level=info msg="StartContainer for \"1d3c9e564c4d8b5f88445723e5f3ea7f4f04ce366cf616a8615d52027ce6cd98\"" Nov 4 23:54:40.640466 containerd[2550]: time="2025-11-04T23:54:40.640436021Z" level=info msg="connecting to shim 1d3c9e564c4d8b5f88445723e5f3ea7f4f04ce366cf616a8615d52027ce6cd98" address="unix:///run/containerd/s/011ab8ba6460a761f585f58ef92c9a9ff6ca53d1c53b20cf8f94641036769270" protocol=ttrpc version=3 Nov 4 23:54:40.653811 systemd[1]: Started cri-containerd-143f4ff1bb03144fcf90c9f015d8a4c2bfee3bb4658e15a762cd214481b1e956.scope - libcontainer container 143f4ff1bb03144fcf90c9f015d8a4c2bfee3bb4658e15a762cd214481b1e956. 
Nov 4 23:54:40.658805 systemd[1]: Started cri-containerd-1d3c9e564c4d8b5f88445723e5f3ea7f4f04ce366cf616a8615d52027ce6cd98.scope - libcontainer container 1d3c9e564c4d8b5f88445723e5f3ea7f4f04ce366cf616a8615d52027ce6cd98. Nov 4 23:54:40.709570 containerd[2550]: time="2025-11-04T23:54:40.709497228Z" level=info msg="StartContainer for \"143f4ff1bb03144fcf90c9f015d8a4c2bfee3bb4658e15a762cd214481b1e956\" returns successfully" Nov 4 23:54:40.710610 containerd[2550]: time="2025-11-04T23:54:40.710086406Z" level=info msg="StartContainer for \"1d3c9e564c4d8b5f88445723e5f3ea7f4f04ce366cf616a8615d52027ce6cd98\" returns successfully" Nov 4 23:54:41.310107 kubelet[3991]: I1104 23:54:41.310021 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ct47h" podStartSLOduration=24.309993773 podStartE2EDuration="24.309993773s" podCreationTimestamp="2025-11-04 23:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:54:41.307848996 +0000 UTC m=+28.214939063" watchObservedRunningTime="2025-11-04 23:54:41.309993773 +0000 UTC m=+28.217083840" Nov 4 23:54:41.346076 kubelet[3991]: I1104 23:54:41.345807 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rmdl7" podStartSLOduration=24.345794379 podStartE2EDuration="24.345794379s" podCreationTimestamp="2025-11-04 23:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:54:41.331667746 +0000 UTC m=+28.238757812" watchObservedRunningTime="2025-11-04 23:54:41.345794379 +0000 UTC m=+28.252884446" Nov 4 23:55:46.553796 systemd[1]: Started sshd@7-10.200.8.39:22-10.200.16.10:38818.service - OpenSSH per-connection server daemon (10.200.16.10:38818). Nov 4 23:55:47.184749 sshd[5304]: Accepted publickey for core from 10.200.16.10 port 38818 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:55:47.185945 sshd-session[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:47.190811 systemd-logind[2523]: New session 10 of user core. Nov 4 23:55:47.197635 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 23:55:47.720755 sshd[5307]: Connection closed by 10.200.16.10 port 38818 Nov 4 23:55:47.721267 sshd-session[5304]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:47.724563 systemd[1]: sshd@7-10.200.8.39:22-10.200.16.10:38818.service: Deactivated successfully. Nov 4 23:55:47.726191 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 23:55:47.727079 systemd-logind[2523]: Session 10 logged out. Waiting for processes to exit. Nov 4 23:55:47.728775 systemd-logind[2523]: Removed session 10. Nov 4 23:55:52.831821 systemd[1]: Started sshd@8-10.200.8.39:22-10.200.16.10:42888.service - OpenSSH per-connection server daemon (10.200.16.10:42888). Nov 4 23:55:53.462770 sshd[5323]: Accepted publickey for core from 10.200.16.10 port 42888 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:55:53.463904 sshd-session[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:53.468460 systemd-logind[2523]: New session 11 of user core. Nov 4 23:55:53.472660 systemd[1]: Started session-11.scope - Session 11 of User core. 
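Earlier in this stretch, the pod_startup_latency_tracker entries record podStartSLOduration alongside the raw timestamps, and with firstStartedPulling/lastFinishedPulling left at the zero time (no image pull observed) the duration is simply the observed running time minus podCreationTimestamp. A quick cross-check against the coredns-674b8bbfcf-ct47h entry, using only values copied from that log line:

```python
from datetime import datetime, timezone

# Timestamps copied from the pod_startup_latency_tracker entry for coredns-674b8bbfcf-ct47h.
pod_creation     = datetime(2025, 11, 4, 23, 54, 17, tzinfo=timezone.utc)
# 23:54:41.309993773, truncated to microsecond precision for datetime
observed_running = datetime(2025, 11, 4, 23, 54, 41, 309993, tzinfo=timezone.utc)

slo = (observed_running - pod_creation).total_seconds()
print(f"podStartSLOduration ≈ {slo:.6f}s")   # 24.309993s, matching the logged 24.309993773s
```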
Nov 4 23:55:53.955661 sshd[5326]: Connection closed by 10.200.16.10 port 42888 Nov 4 23:55:53.956193 sshd-session[5323]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:53.959790 systemd-logind[2523]: Session 11 logged out. Waiting for processes to exit. Nov 4 23:55:53.960018 systemd[1]: sshd@8-10.200.8.39:22-10.200.16.10:42888.service: Deactivated successfully. Nov 4 23:55:53.961948 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 23:55:53.963368 systemd-logind[2523]: Removed session 11. Nov 4 23:55:59.076635 systemd[1]: Started sshd@9-10.200.8.39:22-10.200.16.10:42898.service - OpenSSH per-connection server daemon (10.200.16.10:42898). Nov 4 23:55:59.710027 sshd[5338]: Accepted publickey for core from 10.200.16.10 port 42898 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:55:59.711262 sshd-session[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:59.715759 systemd-logind[2523]: New session 12 of user core. Nov 4 23:55:59.720611 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 23:56:00.207090 sshd[5341]: Connection closed by 10.200.16.10 port 42898 Nov 4 23:56:00.207642 sshd-session[5338]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:00.211254 systemd[1]: sshd@9-10.200.8.39:22-10.200.16.10:42898.service: Deactivated successfully. Nov 4 23:56:00.212980 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 23:56:00.213835 systemd-logind[2523]: Session 12 logged out. Waiting for processes to exit. Nov 4 23:56:00.215007 systemd-logind[2523]: Removed session 12. Nov 4 23:56:00.320243 systemd[1]: Started sshd@10-10.200.8.39:22-10.200.16.10:44166.service - OpenSSH per-connection server daemon (10.200.16.10:44166). Nov 4 23:56:00.944558 sshd[5354]: Accepted publickey for core from 10.200.16.10 port 44166 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:00.945779 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:00.950200 systemd-logind[2523]: New session 13 of user core. Nov 4 23:56:00.956608 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 23:56:01.474978 sshd[5357]: Connection closed by 10.200.16.10 port 44166 Nov 4 23:56:01.475672 sshd-session[5354]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:01.479264 systemd-logind[2523]: Session 13 logged out. Waiting for processes to exit. Nov 4 23:56:01.479630 systemd[1]: sshd@10-10.200.8.39:22-10.200.16.10:44166.service: Deactivated successfully. Nov 4 23:56:01.481673 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 23:56:01.483446 systemd-logind[2523]: Removed session 13. Nov 4 23:56:01.585250 systemd[1]: Started sshd@11-10.200.8.39:22-10.200.16.10:44180.service - OpenSSH per-connection server daemon (10.200.16.10:44180). Nov 4 23:56:02.212002 sshd[5367]: Accepted publickey for core from 10.200.16.10 port 44180 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:02.213138 sshd-session[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:02.217535 systemd-logind[2523]: New session 14 of user core. Nov 4 23:56:02.226923 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 4 23:56:02.714535 sshd[5370]: Connection closed by 10.200.16.10 port 44180 Nov 4 23:56:02.715032 sshd-session[5367]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:02.717887 systemd[1]: sshd@11-10.200.8.39:22-10.200.16.10:44180.service: Deactivated successfully. Nov 4 23:56:02.720034 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 23:56:02.721720 systemd-logind[2523]: Session 14 logged out. Waiting for processes to exit. Nov 4 23:56:02.722437 systemd-logind[2523]: Removed session 14. Nov 4 23:56:07.834574 systemd[1]: Started sshd@12-10.200.8.39:22-10.200.16.10:44192.service - OpenSSH per-connection server daemon (10.200.16.10:44192). Nov 4 23:56:08.472505 sshd[5382]: Accepted publickey for core from 10.200.16.10 port 44192 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:08.473731 sshd-session[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:08.478461 systemd-logind[2523]: New session 15 of user core. Nov 4 23:56:08.483628 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 23:56:08.964992 sshd[5385]: Connection closed by 10.200.16.10 port 44192 Nov 4 23:56:08.965557 sshd-session[5382]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:08.969026 systemd[1]: sshd@12-10.200.8.39:22-10.200.16.10:44192.service: Deactivated successfully. Nov 4 23:56:08.970886 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 23:56:08.971997 systemd-logind[2523]: Session 15 logged out. Waiting for processes to exit. Nov 4 23:56:08.972975 systemd-logind[2523]: Removed session 15. Nov 4 23:56:14.076735 systemd[1]: Started sshd@13-10.200.8.39:22-10.200.16.10:46088.service - OpenSSH per-connection server daemon (10.200.16.10:46088). Nov 4 23:56:14.709165 sshd[5399]: Accepted publickey for core from 10.200.16.10 port 46088 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:14.710216 sshd-session[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:14.714388 systemd-logind[2523]: New session 16 of user core. Nov 4 23:56:14.723614 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 23:56:15.206591 sshd[5402]: Connection closed by 10.200.16.10 port 46088 Nov 4 23:56:15.208126 sshd-session[5399]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:15.211900 systemd-logind[2523]: Session 16 logged out. Waiting for processes to exit. Nov 4 23:56:15.212252 systemd[1]: sshd@13-10.200.8.39:22-10.200.16.10:46088.service: Deactivated successfully. Nov 4 23:56:15.214102 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 23:56:15.215727 systemd-logind[2523]: Removed session 16. Nov 4 23:56:20.320838 systemd[1]: Started sshd@14-10.200.8.39:22-10.200.16.10:54374.service - OpenSSH per-connection server daemon (10.200.16.10:54374). Nov 4 23:56:20.953286 sshd[5416]: Accepted publickey for core from 10.200.16.10 port 54374 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:20.954389 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:20.958560 systemd-logind[2523]: New session 17 of user core. Nov 4 23:56:20.965632 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 4 23:56:21.447046 sshd[5419]: Connection closed by 10.200.16.10 port 54374 Nov 4 23:56:21.447571 sshd-session[5416]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:21.450936 systemd[1]: sshd@14-10.200.8.39:22-10.200.16.10:54374.service: Deactivated successfully. Nov 4 23:56:21.452774 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 23:56:21.453642 systemd-logind[2523]: Session 17 logged out. Waiting for processes to exit. Nov 4 23:56:21.454885 systemd-logind[2523]: Removed session 17. Nov 4 23:56:21.558341 systemd[1]: Started sshd@15-10.200.8.39:22-10.200.16.10:54390.service - OpenSSH per-connection server daemon (10.200.16.10:54390). Nov 4 23:56:22.189028 sshd[5431]: Accepted publickey for core from 10.200.16.10 port 54390 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:22.190127 sshd-session[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:22.194390 systemd-logind[2523]: New session 18 of user core. Nov 4 23:56:22.198750 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 23:56:22.761541 sshd[5434]: Connection closed by 10.200.16.10 port 54390 Nov 4 23:56:22.762063 sshd-session[5431]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:22.765611 systemd[1]: sshd@15-10.200.8.39:22-10.200.16.10:54390.service: Deactivated successfully. Nov 4 23:56:22.767454 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 23:56:22.768432 systemd-logind[2523]: Session 18 logged out. Waiting for processes to exit. Nov 4 23:56:22.769803 systemd-logind[2523]: Removed session 18. Nov 4 23:56:22.877353 systemd[1]: Started sshd@16-10.200.8.39:22-10.200.16.10:54406.service - OpenSSH per-connection server daemon (10.200.16.10:54406). Nov 4 23:56:23.514068 sshd[5444]: Accepted publickey for core from 10.200.16.10 port 54406 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:23.515186 sshd-session[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:23.519559 systemd-logind[2523]: New session 19 of user core. Nov 4 23:56:23.524636 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 23:56:24.552619 sshd[5447]: Connection closed by 10.200.16.10 port 54406 Nov 4 23:56:24.553170 sshd-session[5444]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:24.556788 systemd[1]: sshd@16-10.200.8.39:22-10.200.16.10:54406.service: Deactivated successfully. Nov 4 23:56:24.558702 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 23:56:24.559515 systemd-logind[2523]: Session 19 logged out. Waiting for processes to exit. Nov 4 23:56:24.560982 systemd-logind[2523]: Removed session 19. Nov 4 23:56:24.665368 systemd[1]: Started sshd@17-10.200.8.39:22-10.200.16.10:54416.service - OpenSSH per-connection server daemon (10.200.16.10:54416). Nov 4 23:56:25.294459 sshd[5464]: Accepted publickey for core from 10.200.16.10 port 54416 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:25.295573 sshd-session[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:25.299546 systemd-logind[2523]: New session 20 of user core. Nov 4 23:56:25.306618 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 4 23:56:25.864018 sshd[5467]: Connection closed by 10.200.16.10 port 54416 Nov 4 23:56:25.864542 sshd-session[5464]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:25.867876 systemd[1]: sshd@17-10.200.8.39:22-10.200.16.10:54416.service: Deactivated successfully. Nov 4 23:56:25.869785 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 23:56:25.870854 systemd-logind[2523]: Session 20 logged out. Waiting for processes to exit. Nov 4 23:56:25.872038 systemd-logind[2523]: Removed session 20. Nov 4 23:56:25.978360 systemd[1]: Started sshd@18-10.200.8.39:22-10.200.16.10:54420.service - OpenSSH per-connection server daemon (10.200.16.10:54420). Nov 4 23:56:26.609231 sshd[5477]: Accepted publickey for core from 10.200.16.10 port 54420 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:26.610348 sshd-session[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:26.614773 systemd-logind[2523]: New session 21 of user core. Nov 4 23:56:26.620637 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 4 23:56:27.099847 sshd[5480]: Connection closed by 10.200.16.10 port 54420 Nov 4 23:56:27.100334 sshd-session[5477]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:27.103595 systemd[1]: sshd@18-10.200.8.39:22-10.200.16.10:54420.service: Deactivated successfully. Nov 4 23:56:27.105381 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 23:56:27.106200 systemd-logind[2523]: Session 21 logged out. Waiting for processes to exit. Nov 4 23:56:27.107329 systemd-logind[2523]: Removed session 21. Nov 4 23:56:32.216599 systemd[1]: Started sshd@19-10.200.8.39:22-10.200.16.10:37364.service - OpenSSH per-connection server daemon (10.200.16.10:37364). Nov 4 23:56:32.845272 sshd[5492]: Accepted publickey for core from 10.200.16.10 port 37364 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:32.846374 sshd-session[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:32.850574 systemd-logind[2523]: New session 22 of user core. Nov 4 23:56:32.858610 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 4 23:56:33.344414 sshd[5495]: Connection closed by 10.200.16.10 port 37364 Nov 4 23:56:33.344933 sshd-session[5492]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:33.348299 systemd[1]: sshd@19-10.200.8.39:22-10.200.16.10:37364.service: Deactivated successfully. Nov 4 23:56:33.350091 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 23:56:33.350944 systemd-logind[2523]: Session 22 logged out. Waiting for processes to exit. Nov 4 23:56:33.352258 systemd-logind[2523]: Removed session 22. Nov 4 23:56:38.460754 systemd[1]: Started sshd@20-10.200.8.39:22-10.200.16.10:37378.service - OpenSSH per-connection server daemon (10.200.16.10:37378). Nov 4 23:56:39.093722 sshd[5507]: Accepted publickey for core from 10.200.16.10 port 37378 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:39.094838 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:39.099271 systemd-logind[2523]: New session 23 of user core. Nov 4 23:56:39.103666 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 4 23:56:39.589056 sshd[5512]: Connection closed by 10.200.16.10 port 37378 Nov 4 23:56:39.589606 sshd-session[5507]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:39.592768 systemd[1]: sshd@20-10.200.8.39:22-10.200.16.10:37378.service: Deactivated successfully. Nov 4 23:56:39.594642 systemd[1]: session-23.scope: Deactivated successfully. Nov 4 23:56:39.595884 systemd-logind[2523]: Session 23 logged out. Waiting for processes to exit. Nov 4 23:56:39.596899 systemd-logind[2523]: Removed session 23. Nov 4 23:56:44.699536 systemd[1]: Started sshd@21-10.200.8.39:22-10.200.16.10:36346.service - OpenSSH per-connection server daemon (10.200.16.10:36346). Nov 4 23:56:45.332540 sshd[5524]: Accepted publickey for core from 10.200.16.10 port 36346 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:45.333533 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:45.337987 systemd-logind[2523]: New session 24 of user core. Nov 4 23:56:45.343634 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 4 23:56:45.820913 sshd[5527]: Connection closed by 10.200.16.10 port 36346 Nov 4 23:56:45.821465 sshd-session[5524]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:45.824843 systemd[1]: sshd@21-10.200.8.39:22-10.200.16.10:36346.service: Deactivated successfully. Nov 4 23:56:45.826821 systemd[1]: session-24.scope: Deactivated successfully. Nov 4 23:56:45.827608 systemd-logind[2523]: Session 24 logged out. Waiting for processes to exit. Nov 4 23:56:45.828915 systemd-logind[2523]: Removed session 24. Nov 4 23:56:50.941843 systemd[1]: Started sshd@22-10.200.8.39:22-10.200.16.10:40656.service - OpenSSH per-connection server daemon (10.200.16.10:40656). Nov 4 23:56:51.576392 sshd[5541]: Accepted publickey for core from 10.200.16.10 port 40656 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:51.577575 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:51.582194 systemd-logind[2523]: New session 25 of user core. Nov 4 23:56:51.589628 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 4 23:56:52.069680 sshd[5544]: Connection closed by 10.200.16.10 port 40656 Nov 4 23:56:52.070217 sshd-session[5541]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:52.073665 systemd[1]: sshd@22-10.200.8.39:22-10.200.16.10:40656.service: Deactivated successfully. Nov 4 23:56:52.075521 systemd[1]: session-25.scope: Deactivated successfully. Nov 4 23:56:52.076270 systemd-logind[2523]: Session 25 logged out. Waiting for processes to exit. Nov 4 23:56:52.077401 systemd-logind[2523]: Removed session 25. Nov 4 23:56:52.187416 systemd[1]: Started sshd@23-10.200.8.39:22-10.200.16.10:40658.service - OpenSSH per-connection server daemon (10.200.16.10:40658). Nov 4 23:56:52.818277 sshd[5556]: Accepted publickey for core from 10.200.16.10 port 40658 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:52.819416 sshd-session[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:52.823833 systemd-logind[2523]: New session 26 of user core. Nov 4 23:56:52.831619 systemd[1]: Started session-26.scope - Session 26 of User core. 
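The long run above repeats one shape per SSH login: sshd accepts a publickey connection for core, pam opens the session, systemd-logind announces "New session N of user core.", the client disconnects, and the session scope is deactivated and the session removed. A small sketch that pairs those logind events and reports how long each session lasted; only the wording visible in this log is assumed, and the year is supplied manually because syslog-style stamps omit it:

```python
import re
from datetime import datetime

OPEN_RE  = re.compile(r"(?P<ts>\w+ +\d+ \d+:\d+:\d+\.\d+) systemd-logind\[\d+\]: "
                      r"New session (?P<id>\d+) of user \w+\.")
CLOSE_RE = re.compile(r"(?P<ts>\w+ +\d+ \d+:\d+:\d+\.\d+) systemd-logind\[\d+\]: "
                      r"Removed session (?P<id>\d+)\.")

def parse_ts(ts, year=2025):  # year taken from the surrounding log, not from the stamp itself
    return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

def session_durations(text):
    """Seconds between 'New session N' and 'Removed session N' for each session id."""
    opened = {m["id"]: parse_ts(m["ts"]) for m in OPEN_RE.finditer(text)}
    return {m["id"]: (parse_ts(m["ts"]) - opened[m["id"]]).total_seconds()
            for m in CLOSE_RE.finditer(text) if m["id"] in opened}

log = ("Nov 4 23:55:47.190811 systemd-logind[2523]: New session 10 of user core. "
       "Nov 4 23:55:47.728775 systemd-logind[2523]: Removed session 10.")
print(session_durations(log))   # {'10': 0.537964}
```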
Nov 4 23:56:54.459866 containerd[2550]: time="2025-11-04T23:56:54.459046011Z" level=info msg="StopContainer for \"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\" with timeout 30 (s)" Nov 4 23:56:54.460982 containerd[2550]: time="2025-11-04T23:56:54.460948168Z" level=info msg="Stop container \"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\" with signal terminated" Nov 4 23:56:54.470737 systemd[1]: cri-containerd-8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5.scope: Deactivated successfully. Nov 4 23:56:54.473685 containerd[2550]: time="2025-11-04T23:56:54.473615707Z" level=info msg="received exit event container_id:\"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\" id:\"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\" pid:4520 exited_at:{seconds:1762300614 nanos:473093866}" Nov 4 23:56:54.474224 containerd[2550]: time="2025-11-04T23:56:54.473777097Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\" id:\"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\" pid:4520 exited_at:{seconds:1762300614 nanos:473093866}" Nov 4 23:56:54.479901 containerd[2550]: time="2025-11-04T23:56:54.479688961Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 23:56:54.485313 containerd[2550]: time="2025-11-04T23:56:54.485286656Z" level=info msg="TaskExit event in podsandbox handler container_id:\"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\" id:\"c4a226da7a5285fd1873ad0ed99d8dce6b133eb3d8daf56ce70766382dd38b87\" pid:5577 exited_at:{seconds:1762300614 nanos:484725324}" Nov 4 23:56:54.487735 containerd[2550]: time="2025-11-04T23:56:54.487628470Z" level=info msg="StopContainer for \"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\" with timeout 2 (s)" Nov 4 23:56:54.487888 containerd[2550]: time="2025-11-04T23:56:54.487868948Z" level=info msg="Stop container \"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\" with signal terminated" Nov 4 23:56:54.499739 systemd-networkd[2190]: lxc_health: Link DOWN Nov 4 23:56:54.499746 systemd-networkd[2190]: lxc_health: Lost carrier Nov 4 23:56:54.501871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5-rootfs.mount: Deactivated successfully. Nov 4 23:56:54.513844 systemd[1]: cri-containerd-852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a.scope: Deactivated successfully. Nov 4 23:56:54.514273 systemd[1]: cri-containerd-852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a.scope: Consumed 5.325s CPU time, 124.8M memory peak, 136K read from disk, 13.3M written to disk. 
Nov 4 23:56:54.515823 containerd[2550]: time="2025-11-04T23:56:54.515780362Z" level=info msg="received exit event container_id:\"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\" id:\"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\" pid:4626 exited_at:{seconds:1762300614 nanos:515596412}" Nov 4 23:56:54.515823 containerd[2550]: time="2025-11-04T23:56:54.515817723Z" level=info msg="TaskExit event in podsandbox handler container_id:\"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\" id:\"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\" pid:4626 exited_at:{seconds:1762300614 nanos:515596412}" Nov 4 23:56:54.534846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a-rootfs.mount: Deactivated successfully. Nov 4 23:56:54.634284 containerd[2550]: time="2025-11-04T23:56:54.634256564Z" level=info msg="StopContainer for \"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\" returns successfully" Nov 4 23:56:54.634821 containerd[2550]: time="2025-11-04T23:56:54.634800782Z" level=info msg="StopPodSandbox for \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\"" Nov 4 23:56:54.634877 containerd[2550]: time="2025-11-04T23:56:54.634857830Z" level=info msg="Container to stop \"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:56:54.634877 containerd[2550]: time="2025-11-04T23:56:54.634870559Z" level=info msg="Container to stop \"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:56:54.634933 containerd[2550]: time="2025-11-04T23:56:54.634880876Z" level=info msg="Container to stop \"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:56:54.634933 containerd[2550]: time="2025-11-04T23:56:54.634889544Z" level=info msg="Container to stop \"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:56:54.634933 containerd[2550]: time="2025-11-04T23:56:54.634898476Z" level=info msg="Container to stop \"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:56:54.640841 containerd[2550]: time="2025-11-04T23:56:54.640815837Z" level=info msg="StopContainer for \"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\" returns successfully" Nov 4 23:56:54.641585 containerd[2550]: time="2025-11-04T23:56:54.641564523Z" level=info msg="StopPodSandbox for \"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\"" Nov 4 23:56:54.641659 containerd[2550]: time="2025-11-04T23:56:54.641618056Z" level=info msg="Container to stop \"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:56:54.642921 systemd[1]: cri-containerd-1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f.scope: Deactivated successfully. 
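The TaskExit events above give the exit time as a raw protobuf timestamp (exited_at:{seconds:... nanos:...}) rather than the human-readable stamps used elsewhere in this journal. Converting the value from the 852548e8... event confirms it lines up with the surrounding Nov 4 23:56:54 entries; nothing beyond the numbers printed above is assumed:

```python
from datetime import datetime, timezone

# exited_at copied from the TaskExit event for container 852548e8... above.
seconds, nanos = 1762300614, 515596412
exited_at = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
print(exited_at.isoformat())   # 2025-11-04T23:56:54.515596+00:00
```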
Nov 4 23:56:54.644207 containerd[2550]: time="2025-11-04T23:56:54.644179611Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" id:\"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" pid:4188 exit_status:137 exited_at:{seconds:1762300614 nanos:643499218}" Nov 4 23:56:54.653779 systemd[1]: cri-containerd-efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa.scope: Deactivated successfully. Nov 4 23:56:54.672369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f-rootfs.mount: Deactivated successfully. Nov 4 23:56:54.680232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa-rootfs.mount: Deactivated successfully. Nov 4 23:56:54.698003 containerd[2550]: time="2025-11-04T23:56:54.697854403Z" level=info msg="shim disconnected" id=efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa namespace=k8s.io Nov 4 23:56:54.698003 containerd[2550]: time="2025-11-04T23:56:54.697878955Z" level=warning msg="cleaning up after shim disconnected" id=efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa namespace=k8s.io Nov 4 23:56:54.698003 containerd[2550]: time="2025-11-04T23:56:54.697887025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 4 23:56:54.700091 containerd[2550]: time="2025-11-04T23:56:54.700062364Z" level=info msg="shim disconnected" id=1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f namespace=k8s.io Nov 4 23:56:54.700091 containerd[2550]: time="2025-11-04T23:56:54.700091897Z" level=warning msg="cleaning up after shim disconnected" id=1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f namespace=k8s.io Nov 4 23:56:54.700235 containerd[2550]: time="2025-11-04T23:56:54.700099660Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 4 23:56:54.700299 containerd[2550]: time="2025-11-04T23:56:54.700262098Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\" id:\"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\" pid:4202 exit_status:137 exited_at:{seconds:1762300614 nanos:655847854}" Nov 4 23:56:54.700462 containerd[2550]: time="2025-11-04T23:56:54.700393414Z" level=info msg="received exit event sandbox_id:\"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\" exit_status:137 exited_at:{seconds:1762300614 nanos:655847854}" Nov 4 23:56:54.703645 containerd[2550]: time="2025-11-04T23:56:54.703617101Z" level=info msg="received exit event sandbox_id:\"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" exit_status:137 exited_at:{seconds:1762300614 nanos:643499218}" Nov 4 23:56:54.704398 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f-shm.mount: Deactivated successfully. 
Nov 4 23:56:54.705113 containerd[2550]: time="2025-11-04T23:56:54.705089276Z" level=info msg="TearDown network for sandbox \"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\" successfully" Nov 4 23:56:54.705168 containerd[2550]: time="2025-11-04T23:56:54.705113364Z" level=info msg="StopPodSandbox for \"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\" returns successfully" Nov 4 23:56:54.705513 containerd[2550]: time="2025-11-04T23:56:54.705211592Z" level=info msg="TearDown network for sandbox \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" successfully" Nov 4 23:56:54.705513 containerd[2550]: time="2025-11-04T23:56:54.705223231Z" level=info msg="StopPodSandbox for \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" returns successfully" Nov 4 23:56:54.836669 kubelet[3991]: I1104 23:56:54.836594 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-bpf-maps\") pod \"e0eaacd8-2398-4038-9439-9c6d8004e526\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " Nov 4 23:56:54.836669 kubelet[3991]: I1104 23:56:54.836627 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-xtables-lock\") pod \"e0eaacd8-2398-4038-9439-9c6d8004e526\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " Nov 4 23:56:54.836669 kubelet[3991]: I1104 23:56:54.836651 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e0eaacd8-2398-4038-9439-9c6d8004e526-hubble-tls\") pod \"e0eaacd8-2398-4038-9439-9c6d8004e526\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " Nov 4 23:56:54.837976 kubelet[3991]: I1104 23:56:54.836695 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-cni-path\") pod \"e0eaacd8-2398-4038-9439-9c6d8004e526\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " Nov 4 23:56:54.837976 kubelet[3991]: I1104 23:56:54.836715 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e0eaacd8-2398-4038-9439-9c6d8004e526-clustermesh-secrets\") pod \"e0eaacd8-2398-4038-9439-9c6d8004e526\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " Nov 4 23:56:54.837976 kubelet[3991]: I1104 23:56:54.836734 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0eaacd8-2398-4038-9439-9c6d8004e526-cilium-config-path\") pod \"e0eaacd8-2398-4038-9439-9c6d8004e526\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " Nov 4 23:56:54.837976 kubelet[3991]: I1104 23:56:54.836749 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-lib-modules\") pod \"e0eaacd8-2398-4038-9439-9c6d8004e526\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " Nov 4 23:56:54.837976 kubelet[3991]: I1104 23:56:54.836766 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvbq2\" (UniqueName: \"kubernetes.io/projected/e0eaacd8-2398-4038-9439-9c6d8004e526-kube-api-access-wvbq2\") pod 
\"e0eaacd8-2398-4038-9439-9c6d8004e526\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " Nov 4 23:56:54.837976 kubelet[3991]: I1104 23:56:54.836786 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96qnk\" (UniqueName: \"kubernetes.io/projected/faebeebb-fb31-4cba-91fc-b1b742dc59cb-kube-api-access-96qnk\") pod \"faebeebb-fb31-4cba-91fc-b1b742dc59cb\" (UID: \"faebeebb-fb31-4cba-91fc-b1b742dc59cb\") " Nov 4 23:56:54.838123 kubelet[3991]: I1104 23:56:54.836804 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-host-proc-sys-net\") pod \"e0eaacd8-2398-4038-9439-9c6d8004e526\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " Nov 4 23:56:54.838123 kubelet[3991]: I1104 23:56:54.836820 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-etc-cni-netd\") pod \"e0eaacd8-2398-4038-9439-9c6d8004e526\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " Nov 4 23:56:54.838123 kubelet[3991]: I1104 23:56:54.836835 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-cilium-cgroup\") pod \"e0eaacd8-2398-4038-9439-9c6d8004e526\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " Nov 4 23:56:54.838123 kubelet[3991]: I1104 23:56:54.836852 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-host-proc-sys-kernel\") pod \"e0eaacd8-2398-4038-9439-9c6d8004e526\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " Nov 4 23:56:54.838123 kubelet[3991]: I1104 23:56:54.836869 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-hostproc\") pod \"e0eaacd8-2398-4038-9439-9c6d8004e526\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " Nov 4 23:56:54.838123 kubelet[3991]: I1104 23:56:54.836888 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-cilium-run\") pod \"e0eaacd8-2398-4038-9439-9c6d8004e526\" (UID: \"e0eaacd8-2398-4038-9439-9c6d8004e526\") " Nov 4 23:56:54.838273 kubelet[3991]: I1104 23:56:54.836907 3991 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/faebeebb-fb31-4cba-91fc-b1b742dc59cb-cilium-config-path\") pod \"faebeebb-fb31-4cba-91fc-b1b742dc59cb\" (UID: \"faebeebb-fb31-4cba-91fc-b1b742dc59cb\") " Nov 4 23:56:54.839063 kubelet[3991]: I1104 23:56:54.839030 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faebeebb-fb31-4cba-91fc-b1b742dc59cb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "faebeebb-fb31-4cba-91fc-b1b742dc59cb" (UID: "faebeebb-fb31-4cba-91fc-b1b742dc59cb"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 23:56:54.839393 kubelet[3991]: I1104 23:56:54.839337 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e0eaacd8-2398-4038-9439-9c6d8004e526" (UID: "e0eaacd8-2398-4038-9439-9c6d8004e526"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:54.839393 kubelet[3991]: I1104 23:56:54.839372 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e0eaacd8-2398-4038-9439-9c6d8004e526" (UID: "e0eaacd8-2398-4038-9439-9c6d8004e526"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:54.839878 kubelet[3991]: I1104 23:56:54.839825 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-cni-path" (OuterVolumeSpecName: "cni-path") pod "e0eaacd8-2398-4038-9439-9c6d8004e526" (UID: "e0eaacd8-2398-4038-9439-9c6d8004e526"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:54.841078 kubelet[3991]: I1104 23:56:54.841028 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e0eaacd8-2398-4038-9439-9c6d8004e526" (UID: "e0eaacd8-2398-4038-9439-9c6d8004e526"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:54.842045 kubelet[3991]: I1104 23:56:54.841521 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e0eaacd8-2398-4038-9439-9c6d8004e526" (UID: "e0eaacd8-2398-4038-9439-9c6d8004e526"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:54.842045 kubelet[3991]: I1104 23:56:54.841556 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e0eaacd8-2398-4038-9439-9c6d8004e526" (UID: "e0eaacd8-2398-4038-9439-9c6d8004e526"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:54.842045 kubelet[3991]: I1104 23:56:54.841571 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e0eaacd8-2398-4038-9439-9c6d8004e526" (UID: "e0eaacd8-2398-4038-9439-9c6d8004e526"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:54.842045 kubelet[3991]: I1104 23:56:54.841587 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e0eaacd8-2398-4038-9439-9c6d8004e526" (UID: "e0eaacd8-2398-4038-9439-9c6d8004e526"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:54.842045 kubelet[3991]: I1104 23:56:54.841601 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-hostproc" (OuterVolumeSpecName: "hostproc") pod "e0eaacd8-2398-4038-9439-9c6d8004e526" (UID: "e0eaacd8-2398-4038-9439-9c6d8004e526"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:54.842220 kubelet[3991]: I1104 23:56:54.841623 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e0eaacd8-2398-4038-9439-9c6d8004e526" (UID: "e0eaacd8-2398-4038-9439-9c6d8004e526"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:54.843563 kubelet[3991]: I1104 23:56:54.843532 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0eaacd8-2398-4038-9439-9c6d8004e526-kube-api-access-wvbq2" (OuterVolumeSpecName: "kube-api-access-wvbq2") pod "e0eaacd8-2398-4038-9439-9c6d8004e526" (UID: "e0eaacd8-2398-4038-9439-9c6d8004e526"). InnerVolumeSpecName "kube-api-access-wvbq2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:56:54.844308 kubelet[3991]: I1104 23:56:54.844287 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faebeebb-fb31-4cba-91fc-b1b742dc59cb-kube-api-access-96qnk" (OuterVolumeSpecName: "kube-api-access-96qnk") pod "faebeebb-fb31-4cba-91fc-b1b742dc59cb" (UID: "faebeebb-fb31-4cba-91fc-b1b742dc59cb"). InnerVolumeSpecName "kube-api-access-96qnk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:56:54.844386 kubelet[3991]: I1104 23:56:54.844373 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0eaacd8-2398-4038-9439-9c6d8004e526-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e0eaacd8-2398-4038-9439-9c6d8004e526" (UID: "e0eaacd8-2398-4038-9439-9c6d8004e526"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:56:54.845051 kubelet[3991]: I1104 23:56:54.845017 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0eaacd8-2398-4038-9439-9c6d8004e526-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e0eaacd8-2398-4038-9439-9c6d8004e526" (UID: "e0eaacd8-2398-4038-9439-9c6d8004e526"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 23:56:54.845395 kubelet[3991]: I1104 23:56:54.845376 3991 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0eaacd8-2398-4038-9439-9c6d8004e526-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e0eaacd8-2398-4038-9439-9c6d8004e526" (UID: "e0eaacd8-2398-4038-9439-9c6d8004e526"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 23:56:54.937639 kubelet[3991]: I1104 23:56:54.937616 3991 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-lib-modules\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:54.937639 kubelet[3991]: I1104 23:56:54.937640 3991 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wvbq2\" (UniqueName: \"kubernetes.io/projected/e0eaacd8-2398-4038-9439-9c6d8004e526-kube-api-access-wvbq2\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:54.937756 kubelet[3991]: I1104 23:56:54.937654 3991 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-96qnk\" (UniqueName: \"kubernetes.io/projected/faebeebb-fb31-4cba-91fc-b1b742dc59cb-kube-api-access-96qnk\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:54.937756 kubelet[3991]: I1104 23:56:54.937663 3991 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-host-proc-sys-net\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:54.937756 kubelet[3991]: I1104 23:56:54.937674 3991 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-etc-cni-netd\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:54.937756 kubelet[3991]: I1104 23:56:54.937685 3991 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-cilium-cgroup\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:54.937756 kubelet[3991]: I1104 23:56:54.937695 3991 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-host-proc-sys-kernel\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:54.937756 kubelet[3991]: I1104 23:56:54.937704 3991 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-hostproc\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:54.937756 kubelet[3991]: I1104 23:56:54.937713 3991 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-cilium-run\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:54.937756 kubelet[3991]: I1104 23:56:54.937723 3991 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/faebeebb-fb31-4cba-91fc-b1b742dc59cb-cilium-config-path\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:54.937907 kubelet[3991]: I1104 23:56:54.937730 3991 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-bpf-maps\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:54.937907 kubelet[3991]: I1104 23:56:54.937739 3991 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-xtables-lock\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:54.937907 kubelet[3991]: I1104 23:56:54.937749 3991 
reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e0eaacd8-2398-4038-9439-9c6d8004e526-hubble-tls\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:54.937907 kubelet[3991]: I1104 23:56:54.937757 3991 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e0eaacd8-2398-4038-9439-9c6d8004e526-cni-path\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:54.937907 kubelet[3991]: I1104 23:56:54.937767 3991 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e0eaacd8-2398-4038-9439-9c6d8004e526-clustermesh-secrets\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:54.937907 kubelet[3991]: I1104 23:56:54.937776 3991 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0eaacd8-2398-4038-9439-9c6d8004e526-cilium-config-path\") on node \"ci-4487.0.0-n-a41d436e4a\" DevicePath \"\"" Nov 4 23:56:55.214716 systemd[1]: Removed slice kubepods-burstable-pode0eaacd8_2398_4038_9439_9c6d8004e526.slice - libcontainer container kubepods-burstable-pode0eaacd8_2398_4038_9439_9c6d8004e526.slice. Nov 4 23:56:55.215014 systemd[1]: kubepods-burstable-pode0eaacd8_2398_4038_9439_9c6d8004e526.slice: Consumed 5.398s CPU time, 125.3M memory peak, 136K read from disk, 13.3M written to disk. Nov 4 23:56:55.216350 systemd[1]: Removed slice kubepods-besteffort-podfaebeebb_fb31_4cba_91fc_b1b742dc59cb.slice - libcontainer container kubepods-besteffort-podfaebeebb_fb31_4cba_91fc_b1b742dc59cb.slice. Nov 4 23:56:55.501432 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa-shm.mount: Deactivated successfully. Nov 4 23:56:55.501988 systemd[1]: var-lib-kubelet-pods-faebeebb\x2dfb31\x2d4cba\x2d91fc\x2db1b742dc59cb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d96qnk.mount: Deactivated successfully. Nov 4 23:56:55.502068 systemd[1]: var-lib-kubelet-pods-e0eaacd8\x2d2398\x2d4038\x2d9439\x2d9c6d8004e526-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwvbq2.mount: Deactivated successfully. Nov 4 23:56:55.502126 systemd[1]: var-lib-kubelet-pods-e0eaacd8\x2d2398\x2d4038\x2d9439\x2d9c6d8004e526-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 4 23:56:55.502185 systemd[1]: var-lib-kubelet-pods-e0eaacd8\x2d2398\x2d4038\x2d9439\x2d9c6d8004e526-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
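The mount units deactivated above encode kubelet volume paths with systemd's unit-name escaping: '/' becomes '-', while characters such as '-' and '~' are written as \x2d and \x7e. A small decoder built only from that rule (the unit name below is copied verbatim from the log; the systemd-escape utility performs the same conversion from a shell):

```python
import re

def unescape_mount_unit(name: str) -> str:
    """Decode a systemd-escaped .mount unit name back into a filesystem path."""
    name = name.removesuffix(".mount")
    path = "/" + name.replace("-", "/")     # unit names drop the leading '/' and use '-' as separator
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

# Unit name copied from the log above.
unit = (r"var-lib-kubelet-pods-faebeebb\x2dfb31\x2d4cba\x2d91fc\x2db1b742dc59cb-volumes-"
        r"kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d96qnk.mount")
print(unescape_mount_unit(unit))
# /var/lib/kubelet/pods/faebeebb-fb31-4cba-91fc-b1b742dc59cb/volumes/kubernetes.io~projected/kube-api-access-96qnk
```

The decoded path sits under the orphaned pod volumes dir that kubelet reports cleaning up shortly afterwards.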
Nov 4 23:56:55.516562 kubelet[3991]: I1104 23:56:55.515567 3991 scope.go:117] "RemoveContainer" containerID="8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5" Nov 4 23:56:55.518153 containerd[2550]: time="2025-11-04T23:56:55.518117312Z" level=info msg="RemoveContainer for \"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\"" Nov 4 23:56:55.537243 containerd[2550]: time="2025-11-04T23:56:55.537216165Z" level=info msg="RemoveContainer for \"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\" returns successfully" Nov 4 23:56:55.537449 kubelet[3991]: I1104 23:56:55.537410 3991 scope.go:117] "RemoveContainer" containerID="8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5" Nov 4 23:56:55.537651 containerd[2550]: time="2025-11-04T23:56:55.537622611Z" level=error msg="ContainerStatus for \"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\": not found" Nov 4 23:56:55.537750 kubelet[3991]: E1104 23:56:55.537729 3991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\": not found" containerID="8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5" Nov 4 23:56:55.537800 kubelet[3991]: I1104 23:56:55.537755 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5"} err="failed to get container status \"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b457e4236be6d3da952b9cf9849ec91c014aa508c2512c3a90f761aead3c0b5\": not found" Nov 4 23:56:55.537800 kubelet[3991]: I1104 23:56:55.537788 3991 scope.go:117] "RemoveContainer" containerID="852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a" Nov 4 23:56:55.539197 containerd[2550]: time="2025-11-04T23:56:55.539165483Z" level=info msg="RemoveContainer for \"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\"" Nov 4 23:56:55.556497 containerd[2550]: time="2025-11-04T23:56:55.556460178Z" level=info msg="RemoveContainer for \"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\" returns successfully" Nov 4 23:56:55.556695 kubelet[3991]: I1104 23:56:55.556660 3991 scope.go:117] "RemoveContainer" containerID="a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1" Nov 4 23:56:55.557917 containerd[2550]: time="2025-11-04T23:56:55.557876795Z" level=info msg="RemoveContainer for \"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1\"" Nov 4 23:56:55.568345 containerd[2550]: time="2025-11-04T23:56:55.568310043Z" level=info msg="RemoveContainer for \"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1\" returns successfully" Nov 4 23:56:55.568502 kubelet[3991]: I1104 23:56:55.568468 3991 scope.go:117] "RemoveContainer" containerID="bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff" Nov 4 23:56:55.570212 containerd[2550]: time="2025-11-04T23:56:55.570176922Z" level=info msg="RemoveContainer for \"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff\"" Nov 4 23:56:55.578325 containerd[2550]: time="2025-11-04T23:56:55.578299795Z" level=info 
msg="RemoveContainer for \"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff\" returns successfully" Nov 4 23:56:55.578490 kubelet[3991]: I1104 23:56:55.578454 3991 scope.go:117] "RemoveContainer" containerID="9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd" Nov 4 23:56:55.580017 containerd[2550]: time="2025-11-04T23:56:55.579992015Z" level=info msg="RemoveContainer for \"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd\"" Nov 4 23:56:55.602167 containerd[2550]: time="2025-11-04T23:56:55.602079528Z" level=info msg="RemoveContainer for \"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd\" returns successfully" Nov 4 23:56:55.602291 kubelet[3991]: I1104 23:56:55.602276 3991 scope.go:117] "RemoveContainer" containerID="2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10" Nov 4 23:56:55.603609 containerd[2550]: time="2025-11-04T23:56:55.603564604Z" level=info msg="RemoveContainer for \"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10\"" Nov 4 23:56:55.615966 containerd[2550]: time="2025-11-04T23:56:55.615935219Z" level=info msg="RemoveContainer for \"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10\" returns successfully" Nov 4 23:56:55.616094 kubelet[3991]: I1104 23:56:55.616073 3991 scope.go:117] "RemoveContainer" containerID="852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a" Nov 4 23:56:55.616320 containerd[2550]: time="2025-11-04T23:56:55.616298757Z" level=error msg="ContainerStatus for \"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\": not found" Nov 4 23:56:55.616427 kubelet[3991]: E1104 23:56:55.616409 3991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\": not found" containerID="852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a" Nov 4 23:56:55.616513 kubelet[3991]: I1104 23:56:55.616431 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a"} err="failed to get container status \"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\": rpc error: code = NotFound desc = an error occurred when try to find container \"852548e88e4880a850441aa7ff4e8f146943ea9ed19db6ce2ed6570e6186b72a\": not found" Nov 4 23:56:55.616513 kubelet[3991]: I1104 23:56:55.616448 3991 scope.go:117] "RemoveContainer" containerID="a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1" Nov 4 23:56:55.616674 containerd[2550]: time="2025-11-04T23:56:55.616636843Z" level=error msg="ContainerStatus for \"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1\": not found" Nov 4 23:56:55.616776 kubelet[3991]: E1104 23:56:55.616754 3991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1\": not found" containerID="a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1" Nov 4 
23:56:55.616819 kubelet[3991]: I1104 23:56:55.616774 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1"} err="failed to get container status \"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"a053d428df0c60fd88fb572dc1e622c38bf23a4b0792562fec7a4547141f59e1\": not found" Nov 4 23:56:55.616819 kubelet[3991]: I1104 23:56:55.616790 3991 scope.go:117] "RemoveContainer" containerID="bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff" Nov 4 23:56:55.616987 containerd[2550]: time="2025-11-04T23:56:55.616946044Z" level=error msg="ContainerStatus for \"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff\": not found" Nov 4 23:56:55.617111 kubelet[3991]: E1104 23:56:55.617094 3991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff\": not found" containerID="bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff" Nov 4 23:56:55.617150 kubelet[3991]: I1104 23:56:55.617123 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff"} err="failed to get container status \"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcff4c89c89d731ecee52c638675b4a4230d3f2a53f686a17d6a03b32ad28cff\": not found" Nov 4 23:56:55.617150 kubelet[3991]: I1104 23:56:55.617139 3991 scope.go:117] "RemoveContainer" containerID="9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd" Nov 4 23:56:55.617291 containerd[2550]: time="2025-11-04T23:56:55.617270686Z" level=error msg="ContainerStatus for \"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd\": not found" Nov 4 23:56:55.617461 kubelet[3991]: E1104 23:56:55.617435 3991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd\": not found" containerID="9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd" Nov 4 23:56:55.617524 kubelet[3991]: I1104 23:56:55.617465 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd"} err="failed to get container status \"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd\": rpc error: code = NotFound desc = an error occurred when try to find container \"9661b7a3859351e97400facf99af5f32cd8a6b262042994df2a5deb47498dacd\": not found" Nov 4 23:56:55.617524 kubelet[3991]: I1104 23:56:55.617512 3991 scope.go:117] "RemoveContainer" containerID="2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10" Nov 4 23:56:55.617715 containerd[2550]: time="2025-11-04T23:56:55.617680497Z" level=error 
msg="ContainerStatus for \"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10\": not found" Nov 4 23:56:55.617814 kubelet[3991]: E1104 23:56:55.617771 3991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10\": not found" containerID="2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10" Nov 4 23:56:55.617814 kubelet[3991]: I1104 23:56:55.617795 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10"} err="failed to get container status \"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10\": rpc error: code = NotFound desc = an error occurred when try to find container \"2331a81670e0cf949f2e3021d569da664f321e3a0b759c00c5af0d0b4fb13f10\": not found" Nov 4 23:56:56.497914 sshd[5559]: Connection closed by 10.200.16.10 port 40658 Nov 4 23:56:56.498535 sshd-session[5556]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:56.502102 systemd[1]: sshd@23-10.200.8.39:22-10.200.16.10:40658.service: Deactivated successfully. Nov 4 23:56:56.503971 systemd[1]: session-26.scope: Deactivated successfully. Nov 4 23:56:56.504679 systemd-logind[2523]: Session 26 logged out. Waiting for processes to exit. Nov 4 23:56:56.505978 systemd-logind[2523]: Removed session 26. Nov 4 23:56:56.612602 systemd[1]: Started sshd@24-10.200.8.39:22-10.200.16.10:40666.service - OpenSSH per-connection server daemon (10.200.16.10:40666). Nov 4 23:56:57.208453 kubelet[3991]: I1104 23:56:57.208415 3991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0eaacd8-2398-4038-9439-9c6d8004e526" path="/var/lib/kubelet/pods/e0eaacd8-2398-4038-9439-9c6d8004e526/volumes" Nov 4 23:56:57.208956 kubelet[3991]: I1104 23:56:57.208935 3991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faebeebb-fb31-4cba-91fc-b1b742dc59cb" path="/var/lib/kubelet/pods/faebeebb-fb31-4cba-91fc-b1b742dc59cb/volumes" Nov 4 23:56:57.242169 sshd[5712]: Accepted publickey for core from 10.200.16.10 port 40666 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:57.243580 sshd-session[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:57.248281 systemd-logind[2523]: New session 27 of user core. Nov 4 23:56:57.252645 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 4 23:56:58.031743 systemd[1]: Created slice kubepods-burstable-poda3cf4e6c_9681_435a_a34d_767036c99332.slice - libcontainer container kubepods-burstable-poda3cf4e6c_9681_435a_a34d_767036c99332.slice. Nov 4 23:56:58.114620 sshd[5715]: Connection closed by 10.200.16.10 port 40666 Nov 4 23:56:58.115110 sshd-session[5712]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:58.119089 systemd-logind[2523]: Session 27 logged out. Waiting for processes to exit. Nov 4 23:56:58.119232 systemd[1]: sshd@24-10.200.8.39:22-10.200.16.10:40666.service: Deactivated successfully. Nov 4 23:56:58.121259 systemd[1]: session-27.scope: Deactivated successfully. Nov 4 23:56:58.122830 systemd-logind[2523]: Removed session 27. 
Nov 4 23:56:58.153330 kubelet[3991]: I1104 23:56:58.153303 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a3cf4e6c-9681-435a-a34d-767036c99332-hubble-tls\") pod \"cilium-5zfk4\" (UID: \"a3cf4e6c-9681-435a-a34d-767036c99332\") " pod="kube-system/cilium-5zfk4" Nov 4 23:56:58.153525 kubelet[3991]: I1104 23:56:58.153337 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a3cf4e6c-9681-435a-a34d-767036c99332-clustermesh-secrets\") pod \"cilium-5zfk4\" (UID: \"a3cf4e6c-9681-435a-a34d-767036c99332\") " pod="kube-system/cilium-5zfk4" Nov 4 23:56:58.153525 kubelet[3991]: I1104 23:56:58.153358 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a3cf4e6c-9681-435a-a34d-767036c99332-host-proc-sys-kernel\") pod \"cilium-5zfk4\" (UID: \"a3cf4e6c-9681-435a-a34d-767036c99332\") " pod="kube-system/cilium-5zfk4" Nov 4 23:56:58.153525 kubelet[3991]: I1104 23:56:58.153377 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a3cf4e6c-9681-435a-a34d-767036c99332-cilium-run\") pod \"cilium-5zfk4\" (UID: \"a3cf4e6c-9681-435a-a34d-767036c99332\") " pod="kube-system/cilium-5zfk4" Nov 4 23:56:58.153525 kubelet[3991]: I1104 23:56:58.153393 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a3cf4e6c-9681-435a-a34d-767036c99332-cilium-cgroup\") pod \"cilium-5zfk4\" (UID: \"a3cf4e6c-9681-435a-a34d-767036c99332\") " pod="kube-system/cilium-5zfk4" Nov 4 23:56:58.153525 kubelet[3991]: I1104 23:56:58.153410 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3cf4e6c-9681-435a-a34d-767036c99332-etc-cni-netd\") pod \"cilium-5zfk4\" (UID: \"a3cf4e6c-9681-435a-a34d-767036c99332\") " pod="kube-system/cilium-5zfk4" Nov 4 23:56:58.153525 kubelet[3991]: I1104 23:56:58.153429 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3cf4e6c-9681-435a-a34d-767036c99332-lib-modules\") pod \"cilium-5zfk4\" (UID: \"a3cf4e6c-9681-435a-a34d-767036c99332\") " pod="kube-system/cilium-5zfk4" Nov 4 23:56:58.153715 kubelet[3991]: I1104 23:56:58.153446 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3cf4e6c-9681-435a-a34d-767036c99332-cilium-config-path\") pod \"cilium-5zfk4\" (UID: \"a3cf4e6c-9681-435a-a34d-767036c99332\") " pod="kube-system/cilium-5zfk4" Nov 4 23:56:58.153715 kubelet[3991]: I1104 23:56:58.153493 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a3cf4e6c-9681-435a-a34d-767036c99332-cni-path\") pod \"cilium-5zfk4\" (UID: \"a3cf4e6c-9681-435a-a34d-767036c99332\") " pod="kube-system/cilium-5zfk4" Nov 4 23:56:58.153715 kubelet[3991]: I1104 23:56:58.153520 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/a3cf4e6c-9681-435a-a34d-767036c99332-xtables-lock\") pod \"cilium-5zfk4\" (UID: \"a3cf4e6c-9681-435a-a34d-767036c99332\") " pod="kube-system/cilium-5zfk4" Nov 4 23:56:58.153715 kubelet[3991]: I1104 23:56:58.153564 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a3cf4e6c-9681-435a-a34d-767036c99332-hostproc\") pod \"cilium-5zfk4\" (UID: \"a3cf4e6c-9681-435a-a34d-767036c99332\") " pod="kube-system/cilium-5zfk4" Nov 4 23:56:58.153715 kubelet[3991]: I1104 23:56:58.153587 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a3cf4e6c-9681-435a-a34d-767036c99332-host-proc-sys-net\") pod \"cilium-5zfk4\" (UID: \"a3cf4e6c-9681-435a-a34d-767036c99332\") " pod="kube-system/cilium-5zfk4" Nov 4 23:56:58.153715 kubelet[3991]: I1104 23:56:58.153606 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-645vh\" (UniqueName: \"kubernetes.io/projected/a3cf4e6c-9681-435a-a34d-767036c99332-kube-api-access-645vh\") pod \"cilium-5zfk4\" (UID: \"a3cf4e6c-9681-435a-a34d-767036c99332\") " pod="kube-system/cilium-5zfk4" Nov 4 23:56:58.153851 kubelet[3991]: I1104 23:56:58.153640 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a3cf4e6c-9681-435a-a34d-767036c99332-bpf-maps\") pod \"cilium-5zfk4\" (UID: \"a3cf4e6c-9681-435a-a34d-767036c99332\") " pod="kube-system/cilium-5zfk4" Nov 4 23:56:58.153851 kubelet[3991]: I1104 23:56:58.153661 3991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a3cf4e6c-9681-435a-a34d-767036c99332-cilium-ipsec-secrets\") pod \"cilium-5zfk4\" (UID: \"a3cf4e6c-9681-435a-a34d-767036c99332\") " pod="kube-system/cilium-5zfk4" Nov 4 23:56:58.228742 systemd[1]: Started sshd@25-10.200.8.39:22-10.200.16.10:40680.service - OpenSSH per-connection server daemon (10.200.16.10:40680). Nov 4 23:56:58.298148 kubelet[3991]: E1104 23:56:58.298066 3991 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 4 23:56:58.337948 containerd[2550]: time="2025-11-04T23:56:58.337921103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5zfk4,Uid:a3cf4e6c-9681-435a-a34d-767036c99332,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:58.372954 containerd[2550]: time="2025-11-04T23:56:58.372914042Z" level=info msg="connecting to shim 7365fe9c8a95266409b966a5ca569856bad1db4ad34d0662f2eddee70390b016" address="unix:///run/containerd/s/619682cd072219ea57fdc2a4df39bef8f52cf7056ac37d84b65ce609bf9f5c05" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:58.396642 systemd[1]: Started cri-containerd-7365fe9c8a95266409b966a5ca569856bad1db4ad34d0662f2eddee70390b016.scope - libcontainer container 7365fe9c8a95266409b966a5ca569856bad1db4ad34d0662f2eddee70390b016. 
Nov 4 23:56:58.420820 containerd[2550]: time="2025-11-04T23:56:58.420793091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5zfk4,Uid:a3cf4e6c-9681-435a-a34d-767036c99332,Namespace:kube-system,Attempt:0,} returns sandbox id \"7365fe9c8a95266409b966a5ca569856bad1db4ad34d0662f2eddee70390b016\"" Nov 4 23:56:58.428908 containerd[2550]: time="2025-11-04T23:56:58.428879682Z" level=info msg="CreateContainer within sandbox \"7365fe9c8a95266409b966a5ca569856bad1db4ad34d0662f2eddee70390b016\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 4 23:56:58.444857 containerd[2550]: time="2025-11-04T23:56:58.444833895Z" level=info msg="Container 834a9eadfd8a1e31bf5052b490372107b13d85f318eb62cecc591ed5b8f77c90: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:58.457327 containerd[2550]: time="2025-11-04T23:56:58.457304350Z" level=info msg="CreateContainer within sandbox \"7365fe9c8a95266409b966a5ca569856bad1db4ad34d0662f2eddee70390b016\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"834a9eadfd8a1e31bf5052b490372107b13d85f318eb62cecc591ed5b8f77c90\"" Nov 4 23:56:58.458448 containerd[2550]: time="2025-11-04T23:56:58.457672415Z" level=info msg="StartContainer for \"834a9eadfd8a1e31bf5052b490372107b13d85f318eb62cecc591ed5b8f77c90\"" Nov 4 23:56:58.459143 containerd[2550]: time="2025-11-04T23:56:58.459119375Z" level=info msg="connecting to shim 834a9eadfd8a1e31bf5052b490372107b13d85f318eb62cecc591ed5b8f77c90" address="unix:///run/containerd/s/619682cd072219ea57fdc2a4df39bef8f52cf7056ac37d84b65ce609bf9f5c05" protocol=ttrpc version=3 Nov 4 23:56:58.476612 systemd[1]: Started cri-containerd-834a9eadfd8a1e31bf5052b490372107b13d85f318eb62cecc591ed5b8f77c90.scope - libcontainer container 834a9eadfd8a1e31bf5052b490372107b13d85f318eb62cecc591ed5b8f77c90. Nov 4 23:56:58.502583 containerd[2550]: time="2025-11-04T23:56:58.502556414Z" level=info msg="StartContainer for \"834a9eadfd8a1e31bf5052b490372107b13d85f318eb62cecc591ed5b8f77c90\" returns successfully" Nov 4 23:56:58.507150 systemd[1]: cri-containerd-834a9eadfd8a1e31bf5052b490372107b13d85f318eb62cecc591ed5b8f77c90.scope: Deactivated successfully. Nov 4 23:56:58.510545 containerd[2550]: time="2025-11-04T23:56:58.510506056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"834a9eadfd8a1e31bf5052b490372107b13d85f318eb62cecc591ed5b8f77c90\" id:\"834a9eadfd8a1e31bf5052b490372107b13d85f318eb62cecc591ed5b8f77c90\" pid:5790 exited_at:{seconds:1762300618 nanos:510159190}" Nov 4 23:56:58.510677 containerd[2550]: time="2025-11-04T23:56:58.510657340Z" level=info msg="received exit event container_id:\"834a9eadfd8a1e31bf5052b490372107b13d85f318eb62cecc591ed5b8f77c90\" id:\"834a9eadfd8a1e31bf5052b490372107b13d85f318eb62cecc591ed5b8f77c90\" pid:5790 exited_at:{seconds:1762300618 nanos:510159190}" Nov 4 23:56:58.856865 sshd[5726]: Accepted publickey for core from 10.200.16.10 port 40680 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:56:58.858125 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:58.863207 systemd-logind[2523]: New session 28 of user core. Nov 4 23:56:58.869619 systemd[1]: Started session-28.scope - Session 28 of User core. 
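In the entries above, the cilium-5zfk4 sandbox comes up and its first init container, mount-cgroup, is created inside that sandbox, started, runs to completion, and exits, with containerd reporting the finish as a TaskExit event. A rough sketch of the CreateContainer/StartContainer pair against the CRI runtime service, with the sandbox ID and pod metadata taken from the log and the image and command as stand-in placeholders, might look like this:

// Sketch: create and start one init container inside an existing pod sandbox
// via the CRI runtime service (not the kubelet's actual code path).
package main

import (
    "context"
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    rt := runtimeapi.NewRuntimeServiceClient(conn)
    ctx := context.Background()

    // Sandbox ID as returned by RunPodSandbox in the log above.
    sandboxID := "7365fe9c8a95266409b966a5ca569856bad1db4ad34d0662f2eddee70390b016"

    sandboxConfig := &runtimeapi.PodSandboxConfig{
        Metadata: &runtimeapi.PodSandboxMetadata{
            Name:      "cilium-5zfk4",
            Uid:       "a3cf4e6c-9681-435a-a34d-767036c99332",
            Namespace: "kube-system",
        },
    }
    containerConfig := &runtimeapi.ContainerConfig{
        Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
        // Image and command are placeholders; the log does not record them here.
        Image:   &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:placeholder"},
        Command: []string{"sh", "-c", "echo init"},
    }

    created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
        PodSandboxId:  sandboxID,
        Config:        containerConfig,
        SandboxConfig: sandboxConfig,
    })
    if err != nil {
        log.Fatal(err)
    }
    if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
        ContainerId: created.ContainerId,
    }); err != nil {
        log.Fatal(err)
    }
    log.Println("started init container", created.ContainerId)
}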
Nov 4 23:56:59.299538 sshd[5825]: Connection closed by 10.200.16.10 port 40680 Nov 4 23:56:59.300033 sshd-session[5726]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:59.303253 systemd[1]: sshd@25-10.200.8.39:22-10.200.16.10:40680.service: Deactivated successfully. Nov 4 23:56:59.305102 systemd[1]: session-28.scope: Deactivated successfully. Nov 4 23:56:59.306216 systemd-logind[2523]: Session 28 logged out. Waiting for processes to exit. Nov 4 23:56:59.307231 systemd-logind[2523]: Removed session 28. Nov 4 23:56:59.414408 systemd[1]: Started sshd@26-10.200.8.39:22-10.200.16.10:40682.service - OpenSSH per-connection server daemon (10.200.16.10:40682). Nov 4 23:56:59.541754 containerd[2550]: time="2025-11-04T23:56:59.541710719Z" level=info msg="CreateContainer within sandbox \"7365fe9c8a95266409b966a5ca569856bad1db4ad34d0662f2eddee70390b016\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 4 23:56:59.562903 containerd[2550]: time="2025-11-04T23:56:59.562834535Z" level=info msg="Container 0d8f3af3a2f69f301f34a975fda2e82678a9c21c8aaa8e1665c3627d06ca8430: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:59.578875 containerd[2550]: time="2025-11-04T23:56:59.578849114Z" level=info msg="CreateContainer within sandbox \"7365fe9c8a95266409b966a5ca569856bad1db4ad34d0662f2eddee70390b016\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0d8f3af3a2f69f301f34a975fda2e82678a9c21c8aaa8e1665c3627d06ca8430\"" Nov 4 23:56:59.579317 containerd[2550]: time="2025-11-04T23:56:59.579261489Z" level=info msg="StartContainer for \"0d8f3af3a2f69f301f34a975fda2e82678a9c21c8aaa8e1665c3627d06ca8430\"" Nov 4 23:56:59.580295 containerd[2550]: time="2025-11-04T23:56:59.580268244Z" level=info msg="connecting to shim 0d8f3af3a2f69f301f34a975fda2e82678a9c21c8aaa8e1665c3627d06ca8430" address="unix:///run/containerd/s/619682cd072219ea57fdc2a4df39bef8f52cf7056ac37d84b65ce609bf9f5c05" protocol=ttrpc version=3 Nov 4 23:56:59.603651 systemd[1]: Started cri-containerd-0d8f3af3a2f69f301f34a975fda2e82678a9c21c8aaa8e1665c3627d06ca8430.scope - libcontainer container 0d8f3af3a2f69f301f34a975fda2e82678a9c21c8aaa8e1665c3627d06ca8430. Nov 4 23:56:59.631928 systemd[1]: cri-containerd-0d8f3af3a2f69f301f34a975fda2e82678a9c21c8aaa8e1665c3627d06ca8430.scope: Deactivated successfully. Nov 4 23:56:59.633539 containerd[2550]: time="2025-11-04T23:56:59.633448350Z" level=info msg="received exit event container_id:\"0d8f3af3a2f69f301f34a975fda2e82678a9c21c8aaa8e1665c3627d06ca8430\" id:\"0d8f3af3a2f69f301f34a975fda2e82678a9c21c8aaa8e1665c3627d06ca8430\" pid:5848 exited_at:{seconds:1762300619 nanos:633309890}" Nov 4 23:56:59.633785 containerd[2550]: time="2025-11-04T23:56:59.633765714Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d8f3af3a2f69f301f34a975fda2e82678a9c21c8aaa8e1665c3627d06ca8430\" id:\"0d8f3af3a2f69f301f34a975fda2e82678a9c21c8aaa8e1665c3627d06ca8430\" pid:5848 exited_at:{seconds:1762300619 nanos:633309890}" Nov 4 23:56:59.634393 containerd[2550]: time="2025-11-04T23:56:59.634373460Z" level=info msg="StartContainer for \"0d8f3af3a2f69f301f34a975fda2e82678a9c21c8aaa8e1665c3627d06ca8430\" returns successfully" Nov 4 23:56:59.651050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d8f3af3a2f69f301f34a975fda2e82678a9c21c8aaa8e1665c3627d06ca8430-rootfs.mount: Deactivated successfully. 
Nov 4 23:57:00.049191 sshd[5832]: Accepted publickey for core from 10.200.16.10 port 40682 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:57:00.050331 sshd-session[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:57:00.054573 systemd-logind[2523]: New session 29 of user core. Nov 4 23:57:00.059644 systemd[1]: Started session-29.scope - Session 29 of User core. Nov 4 23:57:00.547137 containerd[2550]: time="2025-11-04T23:57:00.547063497Z" level=info msg="CreateContainer within sandbox \"7365fe9c8a95266409b966a5ca569856bad1db4ad34d0662f2eddee70390b016\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 4 23:57:00.579824 containerd[2550]: time="2025-11-04T23:57:00.579181692Z" level=info msg="Container a0f5272c5836f0ed030dd6f3df9e4574c3855e0e20ffcb5642e54130a396db90: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:57:00.584054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount777512212.mount: Deactivated successfully. Nov 4 23:57:00.596195 containerd[2550]: time="2025-11-04T23:57:00.596154765Z" level=info msg="CreateContainer within sandbox \"7365fe9c8a95266409b966a5ca569856bad1db4ad34d0662f2eddee70390b016\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a0f5272c5836f0ed030dd6f3df9e4574c3855e0e20ffcb5642e54130a396db90\"" Nov 4 23:57:00.596630 containerd[2550]: time="2025-11-04T23:57:00.596605704Z" level=info msg="StartContainer for \"a0f5272c5836f0ed030dd6f3df9e4574c3855e0e20ffcb5642e54130a396db90\"" Nov 4 23:57:00.598256 containerd[2550]: time="2025-11-04T23:57:00.598228326Z" level=info msg="connecting to shim a0f5272c5836f0ed030dd6f3df9e4574c3855e0e20ffcb5642e54130a396db90" address="unix:///run/containerd/s/619682cd072219ea57fdc2a4df39bef8f52cf7056ac37d84b65ce609bf9f5c05" protocol=ttrpc version=3 Nov 4 23:57:00.620636 systemd[1]: Started cri-containerd-a0f5272c5836f0ed030dd6f3df9e4574c3855e0e20ffcb5642e54130a396db90.scope - libcontainer container a0f5272c5836f0ed030dd6f3df9e4574c3855e0e20ffcb5642e54130a396db90. Nov 4 23:57:00.649967 systemd[1]: cri-containerd-a0f5272c5836f0ed030dd6f3df9e4574c3855e0e20ffcb5642e54130a396db90.scope: Deactivated successfully. Nov 4 23:57:00.650305 containerd[2550]: time="2025-11-04T23:57:00.650251070Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0f5272c5836f0ed030dd6f3df9e4574c3855e0e20ffcb5642e54130a396db90\" id:\"a0f5272c5836f0ed030dd6f3df9e4574c3855e0e20ffcb5642e54130a396db90\" pid:5897 exited_at:{seconds:1762300620 nanos:649912302}" Nov 4 23:57:00.652718 containerd[2550]: time="2025-11-04T23:57:00.652125684Z" level=info msg="received exit event container_id:\"a0f5272c5836f0ed030dd6f3df9e4574c3855e0e20ffcb5642e54130a396db90\" id:\"a0f5272c5836f0ed030dd6f3df9e4574c3855e0e20ffcb5642e54130a396db90\" pid:5897 exited_at:{seconds:1762300620 nanos:649912302}" Nov 4 23:57:00.659811 containerd[2550]: time="2025-11-04T23:57:00.659788423Z" level=info msg="StartContainer for \"a0f5272c5836f0ed030dd6f3df9e4574c3855e0e20ffcb5642e54130a396db90\" returns successfully" Nov 4 23:57:00.672653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0f5272c5836f0ed030dd6f3df9e4574c3855e0e20ffcb5642e54130a396db90-rootfs.mount: Deactivated successfully. 
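Each of the Cilium init containers in this stretch (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs) follows the same pattern: the transient scope is started, the task runs, and containerd publishes a /tasks/exit event that surfaces here as "TaskExit event in podsandbox handler". A small sketch of watching those exit events with the containerd 1.x Go client, assuming the default socket and the k8s.io namespace used for Kubernetes pods, could look like the following:

// Sketch: subscribe to containerd task-exit events, mirroring the TaskExit
// entries above (socket path, namespace, and filter syntax are assumptions).
package main

import (
    "context"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Kubernetes-managed containers live in the k8s.io namespace.
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    // Only task exit events, matching the "/tasks/exit" topic.
    ch, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
    for {
        select {
        case env := <-ch:
            log.Printf("event %s at %s: %s", env.Topic, env.Timestamp, env.Event.GetTypeUrl())
        case err := <-errs:
            log.Fatal(err)
        }
    }
}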
Nov 4 23:57:01.551512 containerd[2550]: time="2025-11-04T23:57:01.551429260Z" level=info msg="CreateContainer within sandbox \"7365fe9c8a95266409b966a5ca569856bad1db4ad34d0662f2eddee70390b016\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 4 23:57:01.569868 containerd[2550]: time="2025-11-04T23:57:01.569837314Z" level=info msg="Container dd6434e55d0e23d9e40ba57b1e79d9ef746c7a50e62d5d1a9dedac57febdc857: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:57:01.585241 containerd[2550]: time="2025-11-04T23:57:01.585212969Z" level=info msg="CreateContainer within sandbox \"7365fe9c8a95266409b966a5ca569856bad1db4ad34d0662f2eddee70390b016\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dd6434e55d0e23d9e40ba57b1e79d9ef746c7a50e62d5d1a9dedac57febdc857\"" Nov 4 23:57:01.585922 containerd[2550]: time="2025-11-04T23:57:01.585728241Z" level=info msg="StartContainer for \"dd6434e55d0e23d9e40ba57b1e79d9ef746c7a50e62d5d1a9dedac57febdc857\"" Nov 4 23:57:01.586812 containerd[2550]: time="2025-11-04T23:57:01.586785262Z" level=info msg="connecting to shim dd6434e55d0e23d9e40ba57b1e79d9ef746c7a50e62d5d1a9dedac57febdc857" address="unix:///run/containerd/s/619682cd072219ea57fdc2a4df39bef8f52cf7056ac37d84b65ce609bf9f5c05" protocol=ttrpc version=3 Nov 4 23:57:01.608652 systemd[1]: Started cri-containerd-dd6434e55d0e23d9e40ba57b1e79d9ef746c7a50e62d5d1a9dedac57febdc857.scope - libcontainer container dd6434e55d0e23d9e40ba57b1e79d9ef746c7a50e62d5d1a9dedac57febdc857. Nov 4 23:57:01.629236 systemd[1]: cri-containerd-dd6434e55d0e23d9e40ba57b1e79d9ef746c7a50e62d5d1a9dedac57febdc857.scope: Deactivated successfully. Nov 4 23:57:01.631323 containerd[2550]: time="2025-11-04T23:57:01.631252256Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd6434e55d0e23d9e40ba57b1e79d9ef746c7a50e62d5d1a9dedac57febdc857\" id:\"dd6434e55d0e23d9e40ba57b1e79d9ef746c7a50e62d5d1a9dedac57febdc857\" pid:5939 exited_at:{seconds:1762300621 nanos:630930003}" Nov 4 23:57:01.634233 containerd[2550]: time="2025-11-04T23:57:01.634138450Z" level=info msg="received exit event container_id:\"dd6434e55d0e23d9e40ba57b1e79d9ef746c7a50e62d5d1a9dedac57febdc857\" id:\"dd6434e55d0e23d9e40ba57b1e79d9ef746c7a50e62d5d1a9dedac57febdc857\" pid:5939 exited_at:{seconds:1762300621 nanos:630930003}" Nov 4 23:57:01.636163 containerd[2550]: time="2025-11-04T23:57:01.636097052Z" level=info msg="StartContainer for \"dd6434e55d0e23d9e40ba57b1e79d9ef746c7a50e62d5d1a9dedac57febdc857\" returns successfully" Nov 4 23:57:01.651300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd6434e55d0e23d9e40ba57b1e79d9ef746c7a50e62d5d1a9dedac57febdc857-rootfs.mount: Deactivated successfully. Nov 4 23:57:02.557376 containerd[2550]: time="2025-11-04T23:57:02.557335108Z" level=info msg="CreateContainer within sandbox \"7365fe9c8a95266409b966a5ca569856bad1db4ad34d0662f2eddee70390b016\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 4 23:57:02.580427 containerd[2550]: time="2025-11-04T23:57:02.579648289Z" level=info msg="Container 9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:57:02.581898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount532778848.mount: Deactivated successfully. 
Nov 4 23:57:02.606990 containerd[2550]: time="2025-11-04T23:57:02.606960709Z" level=info msg="CreateContainer within sandbox \"7365fe9c8a95266409b966a5ca569856bad1db4ad34d0662f2eddee70390b016\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f\"" Nov 4 23:57:02.607633 containerd[2550]: time="2025-11-04T23:57:02.607387947Z" level=info msg="StartContainer for \"9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f\"" Nov 4 23:57:02.609994 containerd[2550]: time="2025-11-04T23:57:02.609955090Z" level=info msg="connecting to shim 9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f" address="unix:///run/containerd/s/619682cd072219ea57fdc2a4df39bef8f52cf7056ac37d84b65ce609bf9f5c05" protocol=ttrpc version=3 Nov 4 23:57:02.633607 systemd[1]: Started cri-containerd-9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f.scope - libcontainer container 9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f. Nov 4 23:57:02.664052 containerd[2550]: time="2025-11-04T23:57:02.664030456Z" level=info msg="StartContainer for \"9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f\" returns successfully" Nov 4 23:57:02.736163 containerd[2550]: time="2025-11-04T23:57:02.736021661Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f\" id:\"02b2fd7872e0e7212011da0acee184bd3eb900684df0f285c6e8dc0f42c4391e\" pid:6005 exited_at:{seconds:1762300622 nanos:735810551}" Nov 4 23:57:03.095545 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Nov 4 23:57:03.582966 kubelet[3991]: I1104 23:57:03.582284 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5zfk4" podStartSLOduration=6.582270053 podStartE2EDuration="6.582270053s" podCreationTimestamp="2025-11-04 23:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:57:03.581910129 +0000 UTC m=+170.489000196" watchObservedRunningTime="2025-11-04 23:57:03.582270053 +0000 UTC m=+170.489360124" Nov 4 23:57:05.649310 systemd-networkd[2190]: lxc_health: Link UP Nov 4 23:57:05.652503 systemd-networkd[2190]: lxc_health: Gained carrier Nov 4 23:57:07.117625 systemd-networkd[2190]: lxc_health: Gained IPv6LL Nov 4 23:57:11.685925 containerd[2550]: time="2025-11-04T23:57:11.685877770Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f\" id:\"fc0d513a44aa9011d2cacebe3be1cd34160c32ea5feb07d36213a937ec5090eb\" pid:6563 exited_at:{seconds:1762300631 nanos:685676887}" Nov 4 23:57:13.203605 containerd[2550]: time="2025-11-04T23:57:13.203562247Z" level=info msg="StopPodSandbox for \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\"" Nov 4 23:57:13.203987 containerd[2550]: time="2025-11-04T23:57:13.203702824Z" level=info msg="TearDown network for sandbox \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" successfully" Nov 4 23:57:13.203987 containerd[2550]: time="2025-11-04T23:57:13.203715355Z" level=info msg="StopPodSandbox for \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" returns successfully" Nov 4 23:57:13.204374 containerd[2550]: time="2025-11-04T23:57:13.204354250Z" level=info msg="RemovePodSandbox for 
\"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\"" Nov 4 23:57:13.204453 containerd[2550]: time="2025-11-04T23:57:13.204380963Z" level=info msg="Forcibly stopping sandbox \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\"" Nov 4 23:57:13.204518 containerd[2550]: time="2025-11-04T23:57:13.204476483Z" level=info msg="TearDown network for sandbox \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" successfully" Nov 4 23:57:13.205456 containerd[2550]: time="2025-11-04T23:57:13.205422609Z" level=info msg="Ensure that sandbox 1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f in task-service has been cleanup successfully" Nov 4 23:57:13.214811 containerd[2550]: time="2025-11-04T23:57:13.214780398Z" level=info msg="RemovePodSandbox \"1af96abe0040c9cbc51da3f30b516a2bbe669e33456884b92818ee4d36ba138f\" returns successfully" Nov 4 23:57:13.215143 containerd[2550]: time="2025-11-04T23:57:13.215116827Z" level=info msg="StopPodSandbox for \"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\"" Nov 4 23:57:13.215270 containerd[2550]: time="2025-11-04T23:57:13.215253335Z" level=info msg="TearDown network for sandbox \"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\" successfully" Nov 4 23:57:13.215303 containerd[2550]: time="2025-11-04T23:57:13.215268986Z" level=info msg="StopPodSandbox for \"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\" returns successfully" Nov 4 23:57:13.215665 containerd[2550]: time="2025-11-04T23:57:13.215648094Z" level=info msg="RemovePodSandbox for \"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\"" Nov 4 23:57:13.215722 containerd[2550]: time="2025-11-04T23:57:13.215672152Z" level=info msg="Forcibly stopping sandbox \"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\"" Nov 4 23:57:13.215782 containerd[2550]: time="2025-11-04T23:57:13.215747453Z" level=info msg="TearDown network for sandbox \"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\" successfully" Nov 4 23:57:13.216632 containerd[2550]: time="2025-11-04T23:57:13.216612102Z" level=info msg="Ensure that sandbox efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa in task-service has been cleanup successfully" Nov 4 23:57:13.227858 containerd[2550]: time="2025-11-04T23:57:13.227829869Z" level=info msg="RemovePodSandbox \"efabf1386b7c6de9b93c6c209bc061e40f546e1bdffd102e321fa8b6f67a23fa\" returns successfully" Nov 4 23:57:13.766436 containerd[2550]: time="2025-11-04T23:57:13.766383107Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f\" id:\"b2128818c4bb20186a8d438ef4e62da20efc9fddc9b04bd786cf474a3feeacc7\" pid:6588 exited_at:{seconds:1762300633 nanos:766077513}" Nov 4 23:57:15.849299 containerd[2550]: time="2025-11-04T23:57:15.849257350Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f\" id:\"da788197657e078edb5d98888d8764c8a0895707492522f7956618f37ad7dc80\" pid:6614 exited_at:{seconds:1762300635 nanos:848908506}" Nov 4 23:57:15.851296 kubelet[3991]: E1104 23:57:15.851260 3991 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58656->127.0.0.1:35493: write tcp 127.0.0.1:58656->127.0.0.1:35493: write: broken pipe Nov 4 23:57:17.929513 containerd[2550]: time="2025-11-04T23:57:17.929452269Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f\" id:\"c482ac02d18f065f43a068180c53679b1e6eb3c1e9055a7880a3d7bfd8f0e5cf\" pid:6638 exited_at:{seconds:1762300637 nanos:929085070}" Nov 4 23:57:20.016262 containerd[2550]: time="2025-11-04T23:57:20.016187982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f\" id:\"f79157711cd240c29d93e5f0227c5b94309ba5d92294c155b22df1c2d7924e24\" pid:6664 exited_at:{seconds:1762300640 nanos:15988272}" Nov 4 23:57:22.130534 containerd[2550]: time="2025-11-04T23:57:22.130474799Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f\" id:\"196d8220878a1f7b36265390507f092be6757b1aacb159e6440b6d57ecfbb07d\" pid:6687 exited_at:{seconds:1762300642 nanos:129933093}" Nov 4 23:57:24.264416 containerd[2550]: time="2025-11-04T23:57:24.264356063Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f\" id:\"e63e88917bc30dfbb097b4b8900ebb68d5d949174a2d616b5da3488d7f491d5d\" pid:6712 exited_at:{seconds:1762300644 nanos:263828804}" Nov 4 23:57:26.346685 containerd[2550]: time="2025-11-04T23:57:26.346634605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac172fa06bd9b8b459b91d219ba21849ae74709297d1037c35746028c38d32f\" id:\"1916f2c0091f1ce2b4d75f7ef7b74956adaa4641b323a065d489db6448803277\" pid:6734 exited_at:{seconds:1762300646 nanos:346204453}" Nov 4 23:57:26.463323 sshd[5878]: Connection closed by 10.200.16.10 port 40682 Nov 4 23:57:26.463881 sshd-session[5832]: pam_unix(sshd:session): session closed for user core Nov 4 23:57:26.467101 systemd[1]: sshd@26-10.200.8.39:22-10.200.16.10:40682.service: Deactivated successfully. Nov 4 23:57:26.469065 systemd[1]: session-29.scope: Deactivated successfully. Nov 4 23:57:26.470876 systemd-logind[2523]: Session 29 logged out. Waiting for processes to exit. Nov 4 23:57:26.471909 systemd-logind[2523]: Removed session 29.