Nov 4 23:54:50.046798 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 22:00:22 -00 2025 Nov 4 23:54:50.046819 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2 Nov 4 23:54:50.046830 kernel: BIOS-provided physical RAM map: Nov 4 23:54:50.046836 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 4 23:54:50.046842 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Nov 4 23:54:50.046847 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Nov 4 23:54:50.046853 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Nov 4 23:54:50.046858 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Nov 4 23:54:50.046866 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Nov 4 23:54:50.046871 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Nov 4 23:54:50.046876 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Nov 4 23:54:50.046881 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Nov 4 23:54:50.046885 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Nov 4 23:54:50.046890 kernel: printk: legacy bootconsole [earlyser0] enabled Nov 4 23:54:50.046899 kernel: NX (Execute Disable) protection: active Nov 4 23:54:50.046906 kernel: APIC: Static calls initialized Nov 4 23:54:50.046912 kernel: efi: EFI v2.7 by Microsoft Nov 4 23:54:50.046918 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5518 RNG=0x3ffd2018 Nov 4 23:54:50.046924 kernel: random: crng init done Nov 4 23:54:50.046931 kernel: secureboot: Secure boot disabled Nov 4 23:54:50.046937 kernel: SMBIOS 3.1.0 present. 
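
The BIOS-e820 entries above are the firmware's physical-memory map: each range is tagged usable, reserved, ACPI data, or ACPI NVS. As an aside (not part of the boot log), a minimal Python sketch that tallies the ranges marked usable, using the exact line format printed above; for this map the total comes out to roughly 8 GiB, consistent with the Memory: line later in the log:

import re

# e820 lines as printed above (the five ranges the firmware marks "usable").
E820_LINES = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable",
    "BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable",
    "BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable",
    "BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable",
]

RANGE_RE = re.compile(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable")

def usable_bytes(lines):
    """Sum the sizes of all ranges the firmware marked 'usable' (bounds are inclusive)."""
    total = 0
    for line in lines:
        match = RANGE_RE.search(line)
        if match:
            start, end = (int(x, 16) for x in match.groups())
            total += end - start + 1
    return total

print(f"usable RAM reported by firmware: {usable_bytes(E820_LINES) / 2**30:.2f} GiB")
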
Nov 4 23:54:50.046943 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Nov 4 23:54:50.046949 kernel: DMI: Memory slots populated: 2/2 Nov 4 23:54:50.046956 kernel: Hypervisor detected: Microsoft Hyper-V Nov 4 23:54:50.046963 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Nov 4 23:54:50.046969 kernel: Hyper-V: Nested features: 0x3e0101 Nov 4 23:54:50.046975 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Nov 4 23:54:50.046981 kernel: Hyper-V: Using hypercall for remote TLB flush Nov 4 23:54:50.046988 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 4 23:54:50.046994 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 4 23:54:50.047000 kernel: tsc: Detected 2299.999 MHz processor Nov 4 23:54:50.047007 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 4 23:54:50.047014 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 4 23:54:50.047022 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Nov 4 23:54:50.047029 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 4 23:54:50.047036 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 4 23:54:50.047043 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Nov 4 23:54:50.047050 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Nov 4 23:54:50.047056 kernel: Using GB pages for direct mapping Nov 4 23:54:50.047063 kernel: ACPI: Early table checksum verification disabled Nov 4 23:54:50.047074 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Nov 4 23:54:50.047081 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 4 23:54:50.047097 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 4 23:54:50.047105 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Nov 4 23:54:50.047111 kernel: ACPI: FACS 0x000000003FFFE000 000040 Nov 4 23:54:50.047120 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 4 23:54:50.047127 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 4 23:54:50.047133 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 4 23:54:50.047140 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Nov 4 23:54:50.047147 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Nov 4 23:54:50.047154 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 4 23:54:50.047162 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Nov 4 23:54:50.047169 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Nov 4 23:54:50.047176 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Nov 4 23:54:50.047183 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Nov 4 23:54:50.047190 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Nov 4 23:54:50.047197 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Nov 4 23:54:50.047203 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051] Nov 4 23:54:50.047211 
kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Nov 4 23:54:50.047218 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Nov 4 23:54:50.047225 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Nov 4 23:54:50.047232 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Nov 4 23:54:50.047239 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Nov 4 23:54:50.047246 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Nov 4 23:54:50.047251 kernel: Zone ranges: Nov 4 23:54:50.047258 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 4 23:54:50.047264 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 4 23:54:50.047269 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Nov 4 23:54:50.047275 kernel: Device empty Nov 4 23:54:50.047280 kernel: Movable zone start for each node Nov 4 23:54:50.047286 kernel: Early memory node ranges Nov 4 23:54:50.047291 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 4 23:54:50.047296 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Nov 4 23:54:50.047304 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Nov 4 23:54:50.047309 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Nov 4 23:54:50.047314 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Nov 4 23:54:50.047320 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Nov 4 23:54:50.047325 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 4 23:54:50.047331 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 4 23:54:50.047336 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 4 23:54:50.047343 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Nov 4 23:54:50.047349 kernel: ACPI: PM-Timer IO Port: 0x408 Nov 4 23:54:50.047354 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 4 23:54:50.047360 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 4 23:54:50.047365 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 4 23:54:50.047370 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Nov 4 23:54:50.047376 kernel: TSC deadline timer available Nov 4 23:54:50.047383 kernel: CPU topo: Max. logical packages: 1 Nov 4 23:54:50.047388 kernel: CPU topo: Max. logical dies: 1 Nov 4 23:54:50.047394 kernel: CPU topo: Max. dies per package: 1 Nov 4 23:54:50.047399 kernel: CPU topo: Max. threads per core: 2 Nov 4 23:54:50.047404 kernel: CPU topo: Num. cores per package: 1 Nov 4 23:54:50.047410 kernel: CPU topo: Num. 
threads per package: 2 Nov 4 23:54:50.047415 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 4 23:54:50.047422 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Nov 4 23:54:50.047427 kernel: Booting paravirtualized kernel on Hyper-V Nov 4 23:54:50.047433 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 4 23:54:50.047439 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 4 23:54:50.047444 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 4 23:54:50.047450 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 4 23:54:50.047455 kernel: pcpu-alloc: [0] 0 1 Nov 4 23:54:50.047462 kernel: Hyper-V: PV spinlocks enabled Nov 4 23:54:50.047468 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 4 23:54:50.047474 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2 Nov 4 23:54:50.047480 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 4 23:54:50.047486 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 4 23:54:50.047517 kernel: Fallback order for Node 0: 0 Nov 4 23:54:50.047525 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Nov 4 23:54:50.047532 kernel: Policy zone: Normal Nov 4 23:54:50.047537 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 4 23:54:50.047543 kernel: software IO TLB: area num 2. Nov 4 23:54:50.047548 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 4 23:54:50.047554 kernel: ftrace: allocating 40092 entries in 157 pages Nov 4 23:54:50.047559 kernel: ftrace: allocated 157 pages with 5 groups Nov 4 23:54:50.047565 kernel: Dynamic Preempt: voluntary Nov 4 23:54:50.047572 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 4 23:54:50.047578 kernel: rcu: RCU event tracing is enabled. Nov 4 23:54:50.047584 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 4 23:54:50.047595 kernel: Trampoline variant of Tasks RCU enabled. Nov 4 23:54:50.047602 kernel: Rude variant of Tasks RCU enabled. Nov 4 23:54:50.047608 kernel: Tracing variant of Tasks RCU enabled. Nov 4 23:54:50.047614 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 4 23:54:50.047620 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 4 23:54:50.047626 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 4 23:54:50.047634 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 4 23:54:50.047639 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 4 23:54:50.047645 kernel: Using NULL legacy PIC Nov 4 23:54:50.047651 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Nov 4 23:54:50.047659 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 4 23:54:50.047664 kernel: Console: colour dummy device 80x25 Nov 4 23:54:50.047670 kernel: printk: legacy console [tty1] enabled Nov 4 23:54:50.047676 kernel: printk: legacy console [ttyS0] enabled Nov 4 23:54:50.047682 kernel: printk: legacy bootconsole [earlyser0] disabled Nov 4 23:54:50.047688 kernel: ACPI: Core revision 20240827 Nov 4 23:54:50.047694 kernel: Failed to register legacy timer interrupt Nov 4 23:54:50.047701 kernel: APIC: Switch to symmetric I/O mode setup Nov 4 23:54:50.047707 kernel: x2apic enabled Nov 4 23:54:50.047713 kernel: APIC: Switched APIC routing to: physical x2apic Nov 4 23:54:50.047719 kernel: Hyper-V: Host Build 10.0.26100.1414-1-0 Nov 4 23:54:50.047725 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 4 23:54:50.047730 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Nov 4 23:54:50.047736 kernel: Hyper-V: Using IPI hypercalls Nov 4 23:54:50.047743 kernel: APIC: send_IPI() replaced with hv_send_ipi() Nov 4 23:54:50.047749 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Nov 4 23:54:50.047755 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Nov 4 23:54:50.047761 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Nov 4 23:54:50.047767 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Nov 4 23:54:50.047773 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Nov 4 23:54:50.047779 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Nov 4 23:54:50.047786 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999) Nov 4 23:54:50.047792 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 4 23:54:50.047798 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 4 23:54:50.047803 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 4 23:54:50.047809 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 4 23:54:50.047814 kernel: Spectre V2 : Mitigation: Retpolines Nov 4 23:54:50.047820 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 4 23:54:50.047826 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Nov 4 23:54:50.047831 kernel: RETBleed: Vulnerable Nov 4 23:54:50.047838 kernel: Speculative Store Bypass: Vulnerable Nov 4 23:54:50.047844 kernel: active return thunk: its_return_thunk Nov 4 23:54:50.047849 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 4 23:54:50.047855 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 4 23:54:50.047860 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 4 23:54:50.047866 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 4 23:54:50.047871 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 4 23:54:50.047877 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 4 23:54:50.047883 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 4 23:54:50.047888 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Nov 4 23:54:50.047895 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Nov 4 23:54:50.047901 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Nov 4 23:54:50.047906 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 4 23:54:50.047912 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Nov 4 23:54:50.047917 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Nov 4 23:54:50.047923 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Nov 4 23:54:50.047929 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Nov 4 23:54:50.047934 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Nov 4 23:54:50.047940 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Nov 4 23:54:50.047945 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Nov 4 23:54:50.047952 kernel: Freeing SMP alternatives memory: 32K Nov 4 23:54:50.047958 kernel: pid_max: default: 32768 minimum: 301 Nov 4 23:54:50.047963 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 4 23:54:50.047969 kernel: landlock: Up and running. Nov 4 23:54:50.047974 kernel: SELinux: Initializing. Nov 4 23:54:50.047980 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 4 23:54:50.047986 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 4 23:54:50.047991 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Nov 4 23:54:50.047997 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Nov 4 23:54:50.048003 kernel: signal: max sigframe size: 11952 Nov 4 23:54:50.048010 kernel: rcu: Hierarchical SRCU implementation. Nov 4 23:54:50.048016 kernel: rcu: Max phase no-delay instances is 400. Nov 4 23:54:50.048022 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 4 23:54:50.048028 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 4 23:54:50.048034 kernel: smp: Bringing up secondary CPUs ... Nov 4 23:54:50.048040 kernel: smpboot: x86: Booting SMP configuration: Nov 4 23:54:50.048045 kernel: .... 
node #0, CPUs: #1 Nov 4 23:54:50.048051 kernel: smp: Brought up 1 node, 2 CPUs Nov 4 23:54:50.048059 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Nov 4 23:54:50.048065 kernel: Memory: 8099552K/8383228K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 277460K reserved, 0K cma-reserved) Nov 4 23:54:50.048071 kernel: devtmpfs: initialized Nov 4 23:54:50.048077 kernel: x86/mm: Memory block size: 128MB Nov 4 23:54:50.048083 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Nov 4 23:54:50.048097 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 4 23:54:50.048103 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 4 23:54:50.048110 kernel: pinctrl core: initialized pinctrl subsystem Nov 4 23:54:50.048116 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 4 23:54:50.048122 kernel: audit: initializing netlink subsys (disabled) Nov 4 23:54:50.048128 kernel: audit: type=2000 audit(1762300484.058:1): state=initialized audit_enabled=0 res=1 Nov 4 23:54:50.048134 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 4 23:54:50.048140 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 4 23:54:50.048146 kernel: cpuidle: using governor menu Nov 4 23:54:50.048153 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 4 23:54:50.048159 kernel: dca service started, version 1.12.1 Nov 4 23:54:50.048165 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Nov 4 23:54:50.048171 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Nov 4 23:54:50.048177 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 4 23:54:50.048183 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 4 23:54:50.048189 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 4 23:54:50.048196 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 4 23:54:50.048202 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 4 23:54:50.048208 kernel: ACPI: Added _OSI(Module Device) Nov 4 23:54:50.048214 kernel: ACPI: Added _OSI(Processor Device) Nov 4 23:54:50.048220 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 4 23:54:50.048226 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 4 23:54:50.048231 kernel: ACPI: Interpreter enabled Nov 4 23:54:50.048238 kernel: ACPI: PM: (supports S0 S5) Nov 4 23:54:50.048244 kernel: ACPI: Using IOAPIC for interrupt routing Nov 4 23:54:50.048250 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 4 23:54:50.048256 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 4 23:54:50.048262 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Nov 4 23:54:50.048268 kernel: iommu: Default domain type: Translated Nov 4 23:54:50.048273 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 4 23:54:50.048281 kernel: efivars: Registered efivars operations Nov 4 23:54:50.048287 kernel: PCI: Using ACPI for IRQ routing Nov 4 23:54:50.048293 kernel: PCI: System does not support PCI Nov 4 23:54:50.048298 kernel: vgaarb: loaded Nov 4 23:54:50.048304 kernel: clocksource: Switched to clocksource tsc-early Nov 4 23:54:50.048310 kernel: VFS: Disk quotas dquot_6.6.0 Nov 4 23:54:50.048316 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 4 23:54:50.048323 kernel: pnp: PnP ACPI init Nov 4 23:54:50.048329 kernel: pnp: PnP ACPI: found 3 devices Nov 4 23:54:50.048335 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 4 23:54:50.048341 kernel: NET: Registered PF_INET protocol family Nov 4 23:54:50.048347 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 4 23:54:50.048353 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 4 23:54:50.048359 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 4 23:54:50.048366 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 4 23:54:50.048372 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 4 23:54:50.048378 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 4 23:54:50.048384 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 4 23:54:50.048389 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 4 23:54:50.048395 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 4 23:54:50.048401 kernel: NET: Registered PF_XDP protocol family Nov 4 23:54:50.048408 kernel: PCI: CLS 0 bytes, default 64 Nov 4 23:54:50.048414 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 4 23:54:50.048420 kernel: software IO TLB: mapped [mem 0x00000000366e4000-0x000000003a6e4000] (64MB) Nov 4 23:54:50.048426 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Nov 4 23:54:50.048432 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Nov 4 23:54:50.048438 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Nov 4 
23:54:50.048444 kernel: clocksource: Switched to clocksource tsc Nov 4 23:54:50.048451 kernel: Initialise system trusted keyrings Nov 4 23:54:50.048457 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 4 23:54:50.048463 kernel: Key type asymmetric registered Nov 4 23:54:50.048469 kernel: Asymmetric key parser 'x509' registered Nov 4 23:54:50.048474 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 4 23:54:50.048480 kernel: io scheduler mq-deadline registered Nov 4 23:54:50.048486 kernel: io scheduler kyber registered Nov 4 23:54:50.048493 kernel: io scheduler bfq registered Nov 4 23:54:50.048499 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 4 23:54:50.048505 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 4 23:54:50.048511 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 4 23:54:50.048517 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 4 23:54:50.048523 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Nov 4 23:54:50.048529 kernel: i8042: PNP: No PS/2 controller found. Nov 4 23:54:50.048651 kernel: rtc_cmos 00:02: registered as rtc0 Nov 4 23:54:50.048727 kernel: rtc_cmos 00:02: setting system clock to 2025-11-04T23:54:46 UTC (1762300486) Nov 4 23:54:50.048798 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Nov 4 23:54:50.048805 kernel: intel_pstate: Intel P-state driver initializing Nov 4 23:54:50.048811 kernel: efifb: probing for efifb Nov 4 23:54:50.048817 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 4 23:54:50.048825 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 4 23:54:50.048830 kernel: efifb: scrolling: redraw Nov 4 23:54:50.048836 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 4 23:54:50.048843 kernel: Console: switching to colour frame buffer device 128x48 Nov 4 23:54:50.048848 kernel: fb0: EFI VGA frame buffer device Nov 4 23:54:50.048854 kernel: pstore: Using crash dump compression: deflate Nov 4 23:54:50.048860 kernel: pstore: Registered efi_pstore as persistent store backend Nov 4 23:54:50.048866 kernel: NET: Registered PF_INET6 protocol family Nov 4 23:54:50.048873 kernel: Segment Routing with IPv6 Nov 4 23:54:50.048880 kernel: In-situ OAM (IOAM) with IPv6 Nov 4 23:54:50.048885 kernel: NET: Registered PF_PACKET protocol family Nov 4 23:54:50.048891 kernel: Key type dns_resolver registered Nov 4 23:54:50.048897 kernel: IPI shorthand broadcast: enabled Nov 4 23:54:50.048903 kernel: sched_clock: Marking stable (1550236705, 100669014)->(1991707851, -340802132) Nov 4 23:54:50.048909 kernel: registered taskstats version 1 Nov 4 23:54:50.048916 kernel: Loading compiled-in X.509 certificates Nov 4 23:54:50.048922 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ace064fb6689a15889f35c6439909c760a72ef44' Nov 4 23:54:50.048928 kernel: Demotion targets for Node 0: null Nov 4 23:54:50.048934 kernel: Key type .fscrypt registered Nov 4 23:54:50.048940 kernel: Key type fscrypt-provisioning registered Nov 4 23:54:50.048946 kernel: ima: No TPM chip found, activating TPM-bypass! 
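
The rtc_cmos driver above reports the hardware clock both as an ISO 8601 timestamp and as the corresponding Unix epoch. A quick check (illustrative, not from the log) that the two encodings agree:

from datetime import datetime, timezone

# Values copied from the rtc_cmos line:
# "setting system clock to 2025-11-04T23:54:46 UTC (1762300486)"
epoch = 1762300486
stamp = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(stamp.isoformat())  # 2025-11-04T23:54:46+00:00
assert stamp == datetime(2025, 11, 4, 23, 54, 46, tzinfo=timezone.utc)
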
Nov 4 23:54:50.048952 kernel: ima: Allocated hash algorithm: sha1 Nov 4 23:54:50.048959 kernel: ima: No architecture policies found Nov 4 23:54:50.048965 kernel: clk: Disabling unused clocks Nov 4 23:54:50.048971 kernel: Freeing unused kernel image (initmem) memory: 15936K Nov 4 23:54:50.048977 kernel: Write protecting the kernel read-only data: 40960k Nov 4 23:54:50.048983 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 4 23:54:50.048989 kernel: Run /init as init process Nov 4 23:54:50.048994 kernel: with arguments: Nov 4 23:54:50.049001 kernel: /init Nov 4 23:54:50.049007 kernel: with environment: Nov 4 23:54:50.049013 kernel: HOME=/ Nov 4 23:54:50.049019 kernel: TERM=linux Nov 4 23:54:50.049024 kernel: hv_vmbus: Vmbus version:5.3 Nov 4 23:54:50.049030 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 4 23:54:50.049036 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 4 23:54:50.049042 kernel: PTP clock support registered Nov 4 23:54:50.049049 kernel: hv_utils: Registering HyperV Utility Driver Nov 4 23:54:50.049055 kernel: hv_vmbus: registering driver hv_utils Nov 4 23:54:50.049061 kernel: hv_utils: Shutdown IC version 3.2 Nov 4 23:54:50.049067 kernel: hv_utils: TimeSync IC version 4.0 Nov 4 23:54:50.049073 kernel: hv_utils: Heartbeat IC version 3.0 Nov 4 23:54:50.049078 kernel: SCSI subsystem initialized Nov 4 23:54:50.049084 kernel: hv_vmbus: registering driver hv_pci Nov 4 23:54:50.049198 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Nov 4 23:54:50.049278 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Nov 4 23:54:50.049369 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Nov 4 23:54:50.049448 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Nov 4 23:54:50.049548 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Nov 4 23:54:50.049637 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Nov 4 23:54:50.049717 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Nov 4 23:54:50.049801 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Nov 4 23:54:50.049808 kernel: hv_vmbus: registering driver hv_storvsc Nov 4 23:54:50.049899 kernel: scsi host0: storvsc_host_t Nov 4 23:54:50.049994 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Nov 4 23:54:50.050001 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 4 23:54:50.050008 kernel: hv_vmbus: registering driver hid_hyperv Nov 4 23:54:50.050014 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Nov 4 23:54:50.050116 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 4 23:54:50.050128 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 4 23:54:50.050137 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Nov 4 23:54:50.050215 kernel: nvme nvme0: pci function c05b:00:00.0 Nov 4 23:54:50.050302 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Nov 4 23:54:50.050367 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 4 23:54:50.050375 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 4 23:54:50.050460 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 4 23:54:50.050469 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 4 
23:54:50.050552 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 4 23:54:50.050560 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 4 23:54:50.050566 kernel: device-mapper: uevent: version 1.0.3 Nov 4 23:54:50.050572 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 4 23:54:50.050578 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 4 23:54:50.050586 kernel: raid6: avx512x4 gen() 43284 MB/s Nov 4 23:54:50.050602 kernel: raid6: avx512x2 gen() 42955 MB/s Nov 4 23:54:50.050609 kernel: raid6: avx512x1 gen() 26107 MB/s Nov 4 23:54:50.050615 kernel: raid6: avx2x4 gen() 35595 MB/s Nov 4 23:54:50.050622 kernel: raid6: avx2x2 gen() 36914 MB/s Nov 4 23:54:50.050627 kernel: raid6: avx2x1 gen() 29926 MB/s Nov 4 23:54:50.050634 kernel: raid6: using algorithm avx512x4 gen() 43284 MB/s Nov 4 23:54:50.050640 kernel: raid6: .... xor() 7820 MB/s, rmw enabled Nov 4 23:54:50.050647 kernel: raid6: using avx512x2 recovery algorithm Nov 4 23:54:50.050653 kernel: xor: automatically using best checksumming function avx Nov 4 23:54:50.050659 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 4 23:54:50.050666 kernel: BTRFS: device fsid f719dc90-1cf7-4f08-a80f-0dda441372cc devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (985) Nov 4 23:54:50.050672 kernel: BTRFS info (device dm-0): first mount of filesystem f719dc90-1cf7-4f08-a80f-0dda441372cc Nov 4 23:54:50.050678 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:54:50.050685 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 4 23:54:50.050692 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 4 23:54:50.050698 kernel: BTRFS info (device dm-0): enabling free space tree Nov 4 23:54:50.050705 kernel: loop: module loaded Nov 4 23:54:50.050711 kernel: loop0: detected capacity change from 0 to 100120 Nov 4 23:54:50.050717 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 4 23:54:50.050724 systemd[1]: Successfully made /usr/ read-only. Nov 4 23:54:50.050734 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 23:54:50.050742 systemd[1]: Detected virtualization microsoft. Nov 4 23:54:50.050748 systemd[1]: Detected architecture x86-64. Nov 4 23:54:50.050754 systemd[1]: Running in initrd. Nov 4 23:54:50.050761 systemd[1]: No hostname configured, using default hostname. Nov 4 23:54:50.050767 systemd[1]: Hostname set to . Nov 4 23:54:50.050775 systemd[1]: Initializing machine ID from random generator. Nov 4 23:54:50.050782 systemd[1]: Queued start job for default target initrd.target. Nov 4 23:54:50.050788 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 23:54:50.050795 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:54:50.050802 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:54:50.050809 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
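
The raid6 lines above are the kernel benchmarking every available gen() implementation and keeping the fastest; the selection itself happens in kernel C code, but the decision amounts to a simple argmax over the measured throughputs. A sketch of that step using the figures printed above:

# Throughput (MB/s) for each gen() variant, as benchmarked above.
gen_results = {
    "avx512x4": 43284,
    "avx512x2": 42955,
    "avx512x1": 26107,
    "avx2x4": 35595,
    "avx2x2": 36914,
    "avx2x1": 29926,
}

best = max(gen_results, key=gen_results.get)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
# Matches the log: "raid6: using algorithm avx512x4 gen() 43284 MB/s"
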
Nov 4 23:54:50.050815 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 23:54:50.050823 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 4 23:54:50.050830 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 4 23:54:50.050837 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 23:54:50.050845 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:54:50.050852 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 4 23:54:50.050858 systemd[1]: Reached target paths.target - Path Units. Nov 4 23:54:50.050865 systemd[1]: Reached target slices.target - Slice Units. Nov 4 23:54:50.050871 systemd[1]: Reached target swap.target - Swaps. Nov 4 23:54:50.050878 systemd[1]: Reached target timers.target - Timer Units. Nov 4 23:54:50.050884 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 23:54:50.050893 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 23:54:50.050900 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 4 23:54:50.050906 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 4 23:54:50.050913 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:54:50.050919 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 23:54:50.050926 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:54:50.050934 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 23:54:50.050941 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 4 23:54:50.050947 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 4 23:54:50.050954 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 23:54:50.050961 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 4 23:54:50.050968 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 4 23:54:50.050974 systemd[1]: Starting systemd-fsck-usr.service... Nov 4 23:54:50.050982 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 23:54:50.050989 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 23:54:50.050996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:54:50.051003 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 4 23:54:50.051011 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:54:50.051017 systemd[1]: Finished systemd-fsck-usr.service. Nov 4 23:54:50.051024 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 23:54:50.051041 systemd-journald[1121]: Collecting audit messages is disabled. Nov 4 23:54:50.051060 systemd-journald[1121]: Journal started Nov 4 23:54:50.051076 systemd-journald[1121]: Runtime Journal (/run/log/journal/670b3f7977a543bfb0a8ee310067b19c) is 8M, max 158.6M, 150.6M free. Nov 4 23:54:50.055933 systemd[1]: Started systemd-journald.service - Journal Service. 
Nov 4 23:54:50.059290 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 23:54:50.084228 systemd-tmpfiles[1133]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 4 23:54:50.086592 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 23:54:50.091163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 23:54:50.096714 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:54:50.117110 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 4 23:54:50.122982 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:54:50.141668 systemd-modules-load[1123]: Inserted module 'br_netfilter' Nov 4 23:54:50.146273 kernel: Bridge firewalling registered Nov 4 23:54:50.142292 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 23:54:50.146269 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:54:50.177504 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:54:50.180453 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:54:50.185683 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 4 23:54:50.192198 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 23:54:50.216195 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 23:54:50.225204 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 4 23:54:50.304411 dracut-cmdline[1162]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2 Nov 4 23:54:50.346583 systemd-resolved[1150]: Positive Trust Anchors: Nov 4 23:54:50.346595 systemd-resolved[1150]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 23:54:50.346598 systemd-resolved[1150]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 23:54:50.346636 systemd-resolved[1150]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 23:54:50.368540 systemd-resolved[1150]: Defaulting to hostname 'linux'. Nov 4 23:54:50.373474 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 23:54:50.376168 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
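
The systemd-resolved entries above load the DNSSEC root trust anchors as DS records. As an illustration (not from the log), splitting one of those records into its DS fields as defined by RFC 4034:

# First positive trust anchor from the log above.
record = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

owner, rrclass, rrtype, key_tag, algorithm, digest_type, digest = record.split(maxsplit=6)
print({
    "owner": owner,                   # "." = the DNS root zone
    "type": f"{rrclass} {rrtype}",    # IN DS
    "key_tag": int(key_tag),          # 20326
    "algorithm": int(algorithm),      # 8 = RSA/SHA-256
    "digest_type": int(digest_type),  # 2 = SHA-256
    "digest": digest,
})
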
Nov 4 23:54:50.501106 kernel: Loading iSCSI transport class v2.0-870. Nov 4 23:54:50.573109 kernel: iscsi: registered transport (tcp) Nov 4 23:54:50.621114 kernel: iscsi: registered transport (qla4xxx) Nov 4 23:54:50.621161 kernel: QLogic iSCSI HBA Driver Nov 4 23:54:50.668291 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 23:54:50.679557 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:54:50.680423 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 23:54:50.717979 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 4 23:54:50.719757 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 4 23:54:50.722203 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 4 23:54:50.757956 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 4 23:54:50.766274 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:54:50.792649 systemd-udevd[1419]: Using default interface naming scheme 'v257'. Nov 4 23:54:50.803813 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:54:50.806686 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 4 23:54:50.830468 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 23:54:50.835539 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 23:54:50.842510 dracut-pre-trigger[1475]: rd.md=0: removing MD RAID activation Nov 4 23:54:50.865846 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 23:54:50.872213 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 23:54:50.893522 systemd-networkd[1507]: lo: Link UP Nov 4 23:54:50.894159 systemd-networkd[1507]: lo: Gained carrier Nov 4 23:54:50.894570 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 23:54:50.896815 systemd[1]: Reached target network.target - Network. Nov 4 23:54:50.922759 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:54:50.928191 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 4 23:54:51.003706 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:54:51.003895 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:54:51.010285 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:54:51.013781 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:54:51.030633 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#249 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 4 23:54:51.035887 kernel: hv_vmbus: registering driver hv_netvsc Nov 4 23:54:51.050599 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fcbf28 (unnamed net_device) (uninitialized): VF slot 1 added Nov 4 23:54:51.050817 kernel: cryptd: max_cpu_qlen set to 1000 Nov 4 23:54:51.061827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 4 23:54:51.077544 systemd-networkd[1507]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:54:51.078755 systemd-networkd[1507]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 23:54:51.080462 systemd-networkd[1507]: eth0: Link UP Nov 4 23:54:51.080556 systemd-networkd[1507]: eth0: Gained carrier Nov 4 23:54:51.080569 systemd-networkd[1507]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:54:51.095139 systemd-networkd[1507]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 4 23:54:51.106173 kernel: AES CTR mode by8 optimization enabled Nov 4 23:54:51.243117 kernel: nvme nvme0: using unchecked data buffer Nov 4 23:54:51.329059 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Nov 4 23:54:51.334288 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 4 23:54:51.433652 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Nov 4 23:54:51.458564 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Nov 4 23:54:51.472059 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 4 23:54:51.556668 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 4 23:54:51.561752 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 23:54:51.563519 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 23:54:51.563547 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 23:54:51.575448 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 4 23:54:51.608036 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
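
Within this block systemd-networkd reports the DHCPv4 lease for eth0: address 10.200.8.17/24, gateway 10.200.8.1, offered by 168.63.129.16 (Azure's platform address). A small illustrative sketch deriving the subnet from those logged values with Python's ipaddress module:

import ipaddress

# Values copied from the "DHCPv4 address ... acquired from ..." line above.
lease = ipaddress.ip_interface("10.200.8.17/24")
gateway = ipaddress.ip_address("10.200.8.1")
dhcp_server = ipaddress.ip_address("168.63.129.16")

print("subnet:", lease.network)                              # 10.200.8.0/24
print("gateway on-link:", gateway in lease.network)          # True
print("DHCP server on-link:", dhcp_server in lease.network)  # False: it is the Azure fabric address
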
Nov 4 23:54:52.071109 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Nov 4 23:54:52.075491 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Nov 4 23:54:52.075737 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Nov 4 23:54:52.077140 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Nov 4 23:54:52.082204 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Nov 4 23:54:52.086178 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Nov 4 23:54:52.090274 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Nov 4 23:54:52.092274 kernel: pci 7870:00:00.0: enabling Extended Tags Nov 4 23:54:52.106173 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Nov 4 23:54:52.106402 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Nov 4 23:54:52.110265 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Nov 4 23:54:52.128423 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Nov 4 23:54:52.138103 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Nov 4 23:54:52.142377 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fcbf28 eth0: VF registering: eth1 Nov 4 23:54:52.142568 kernel: mana 7870:00:00.0 eth1: joined to eth0 Nov 4 23:54:52.146109 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Nov 4 23:54:52.146344 systemd-networkd[1507]: eth1: Interface name change detected, renamed to enP30832s1. Nov 4 23:54:52.246119 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 4 23:54:52.250112 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 4 23:54:52.250381 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fcbf28 eth0: Data path switched to VF: enP30832s1 Nov 4 23:54:52.250674 systemd-networkd[1507]: enP30832s1: Link UP Nov 4 23:54:52.251890 systemd-networkd[1507]: enP30832s1: Gained carrier Nov 4 23:54:52.371288 systemd-networkd[1507]: eth0: Gained IPv6LL Nov 4 23:54:52.606452 disk-uuid[1682]: Warning: The kernel is still using the old partition table. Nov 4 23:54:52.606452 disk-uuid[1682]: The new table will be used at the next reboot or after you Nov 4 23:54:52.606452 disk-uuid[1682]: run partprobe(8) or kpartx(8) Nov 4 23:54:52.606452 disk-uuid[1682]: The operation has completed successfully. Nov 4 23:54:52.611980 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 4 23:54:52.612079 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 4 23:54:52.619241 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Nov 4 23:54:52.667117 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1728) Nov 4 23:54:52.669548 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:54:52.669591 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:54:52.713445 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 4 23:54:52.713497 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 4 23:54:52.714532 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 4 23:54:52.721107 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:54:52.721834 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 4 23:54:52.727976 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 4 23:54:53.641732 ignition[1747]: Ignition 2.22.0 Nov 4 23:54:53.641746 ignition[1747]: Stage: fetch-offline Nov 4 23:54:53.644118 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 23:54:53.641860 ignition[1747]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:54:53.646205 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 4 23:54:53.641869 ignition[1747]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 4 23:54:53.641955 ignition[1747]: parsed url from cmdline: "" Nov 4 23:54:53.641958 ignition[1747]: no config URL provided Nov 4 23:54:53.641963 ignition[1747]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 23:54:53.641969 ignition[1747]: no config at "/usr/lib/ignition/user.ign" Nov 4 23:54:53.641974 ignition[1747]: failed to fetch config: resource requires networking Nov 4 23:54:53.642254 ignition[1747]: Ignition finished successfully Nov 4 23:54:53.674797 ignition[1754]: Ignition 2.22.0 Nov 4 23:54:53.674808 ignition[1754]: Stage: fetch Nov 4 23:54:53.675026 ignition[1754]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:54:53.675033 ignition[1754]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 4 23:54:53.675929 ignition[1754]: parsed url from cmdline: "" Nov 4 23:54:53.675932 ignition[1754]: no config URL provided Nov 4 23:54:53.675936 ignition[1754]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 23:54:53.675940 ignition[1754]: no config at "/usr/lib/ignition/user.ign" Nov 4 23:54:53.675956 ignition[1754]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 4 23:54:53.733406 ignition[1754]: GET result: OK Nov 4 23:54:53.733743 ignition[1754]: config has been read from IMDS userdata Nov 4 23:54:53.733769 ignition[1754]: parsing config with SHA512: 81547130415be8886288775178422fc0e0981536d9608d87c73a3ee6444f4d888a0756be78cc3d37725702e2a05f2fe9963084429313f8fbd381f5be051b8262 Nov 4 23:54:53.738078 unknown[1754]: fetched base config from "system" Nov 4 23:54:53.738084 unknown[1754]: fetched base config from "system" Nov 4 23:54:53.738103 unknown[1754]: fetched user config from "azure" Nov 4 23:54:53.740935 ignition[1754]: fetch: fetch complete Nov 4 23:54:53.743428 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 4 23:54:53.740940 ignition[1754]: fetch: fetch passed Nov 4 23:54:53.747485 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
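
The fetch stage above reads the Ignition config from Azure IMDS userdata and logs its SHA-512 before parsing. A minimal sketch of that fetch; only the URL is taken verbatim from the log, while the Metadata header and the base64 decode reflect how Azure IMDS is normally queried and are assumptions here, not Ignition's own code:

import base64
import hashlib
import urllib.request

# Endpoint exactly as logged by ignition[1754] above.
IMDS_USERDATA_URL = (
    "http://169.254.169.254/metadata/instance/compute/userData"
    "?api-version=2021-01-01&format=text"
)

def fetch_userdata_config():
    # Azure IMDS only answers requests that carry the "Metadata: true" header.
    request = urllib.request.Request(IMDS_USERDATA_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(request, timeout=10) as response:
        raw = response.read()
    # userData arrives base64-encoded; the decoded bytes are the Ignition config.
    config = base64.b64decode(raw)
    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())
    return config

if __name__ == "__main__":
    fetch_userdata_config()
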
Nov 4 23:54:53.740990 ignition[1754]: Ignition finished successfully Nov 4 23:54:53.774277 ignition[1761]: Ignition 2.22.0 Nov 4 23:54:53.774287 ignition[1761]: Stage: kargs Nov 4 23:54:53.774516 ignition[1761]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:54:53.777742 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 4 23:54:53.774524 ignition[1761]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 4 23:54:53.782424 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 4 23:54:53.775394 ignition[1761]: kargs: kargs passed Nov 4 23:54:53.775426 ignition[1761]: Ignition finished successfully Nov 4 23:54:53.806650 ignition[1768]: Ignition 2.22.0 Nov 4 23:54:53.806661 ignition[1768]: Stage: disks Nov 4 23:54:53.806861 ignition[1768]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:54:53.809431 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 4 23:54:53.806868 ignition[1768]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 4 23:54:53.813252 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 4 23:54:53.807976 ignition[1768]: disks: disks passed Nov 4 23:54:53.815669 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 4 23:54:53.808017 ignition[1768]: Ignition finished successfully Nov 4 23:54:53.819156 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 23:54:53.822230 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 23:54:53.823872 systemd[1]: Reached target basic.target - Basic System. Nov 4 23:54:53.829248 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 4 23:54:53.976754 systemd-fsck[1776]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Nov 4 23:54:53.981347 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 4 23:54:53.987768 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 4 23:54:56.186104 kernel: EXT4-fs (nvme0n1p9): mounted filesystem cfb29ed0-6faf-41a8-b421-3abc514e4975 r/w with ordered data mode. Quota mode: none. Nov 4 23:54:56.186541 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 4 23:54:56.190673 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 4 23:54:56.252285 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 23:54:56.268071 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 4 23:54:56.274166 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 4 23:54:56.280378 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 4 23:54:56.288495 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1785) Nov 4 23:54:56.288523 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:54:56.288536 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:54:56.280458 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 23:54:56.294173 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 4 23:54:56.294220 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 4 23:54:56.294483 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Nov 4 23:54:56.298457 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 4 23:54:56.299269 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 4 23:54:56.302750 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 4 23:54:56.857790 coreos-metadata[1787]: Nov 04 23:54:56.857 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 4 23:54:56.863236 coreos-metadata[1787]: Nov 04 23:54:56.860 INFO Fetch successful Nov 4 23:54:56.863236 coreos-metadata[1787]: Nov 04 23:54:56.860 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 4 23:54:56.868330 coreos-metadata[1787]: Nov 04 23:54:56.868 INFO Fetch successful Nov 4 23:54:56.881174 coreos-metadata[1787]: Nov 04 23:54:56.881 INFO wrote hostname ci-4487.0.0-n-fda2ba6bd5 to /sysroot/etc/hostname Nov 4 23:54:56.883045 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 4 23:54:57.105170 initrd-setup-root[1815]: cut: /sysroot/etc/passwd: No such file or directory Nov 4 23:54:57.147825 initrd-setup-root[1822]: cut: /sysroot/etc/group: No such file or directory Nov 4 23:54:57.177607 initrd-setup-root[1829]: cut: /sysroot/etc/shadow: No such file or directory Nov 4 23:54:57.182465 initrd-setup-root[1836]: cut: /sysroot/etc/gshadow: No such file or directory Nov 4 23:54:58.499255 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 4 23:54:58.503206 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 4 23:54:58.507253 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 4 23:54:58.539361 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 4 23:54:58.545146 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:54:58.556254 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 4 23:54:58.573806 ignition[1906]: INFO : Ignition 2.22.0 Nov 4 23:54:58.573806 ignition[1906]: INFO : Stage: mount Nov 4 23:54:58.579167 ignition[1906]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:54:58.579167 ignition[1906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 4 23:54:58.579167 ignition[1906]: INFO : mount: mount passed Nov 4 23:54:58.579167 ignition[1906]: INFO : Ignition finished successfully Nov 4 23:54:58.577517 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 4 23:54:58.583556 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 4 23:54:58.610381 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 23:54:58.629110 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1916) Nov 4 23:54:58.629149 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:54:58.631171 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:54:58.637621 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 4 23:54:58.637658 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 4 23:54:58.637671 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 4 23:54:58.640070 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 4 23:54:58.667626 ignition[1933]: INFO : Ignition 2.22.0 Nov 4 23:54:58.667626 ignition[1933]: INFO : Stage: files Nov 4 23:54:58.671007 ignition[1933]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:54:58.671007 ignition[1933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 4 23:54:58.671007 ignition[1933]: DEBUG : files: compiled without relabeling support, skipping Nov 4 23:54:58.681623 ignition[1933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 4 23:54:58.681623 ignition[1933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 4 23:54:58.784067 ignition[1933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 4 23:54:58.786509 ignition[1933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 4 23:54:58.790177 ignition[1933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 4 23:54:58.789195 unknown[1933]: wrote ssh authorized keys file for user: core Nov 4 23:54:58.828278 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 4 23:54:58.828278 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 4 23:54:58.876059 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 4 23:54:58.912131 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 4 23:54:58.915222 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 4 23:54:58.915222 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 4 23:54:59.085054 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 4 23:54:59.598777 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 4 23:54:59.598777 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 4 23:54:59.598777 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 4 23:54:59.598777 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 4 23:54:59.598777 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 4 23:54:59.598777 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 23:54:59.598777 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 23:54:59.598777 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 23:54:59.598777 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 23:54:59.631052 
ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 23:54:59.631052 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 23:54:59.631052 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 4 23:54:59.641137 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 4 23:54:59.641137 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 4 23:54:59.641137 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 4 23:54:59.924625 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 4 23:55:00.513380 ignition[1933]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 4 23:55:00.513380 ignition[1933]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 4 23:55:00.541330 ignition[1933]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 23:55:00.548703 ignition[1933]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 23:55:00.548703 ignition[1933]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 4 23:55:00.567844 ignition[1933]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Nov 4 23:55:00.567844 ignition[1933]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Nov 4 23:55:00.567844 ignition[1933]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 4 23:55:00.567844 ignition[1933]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 4 23:55:00.567844 ignition[1933]: INFO : files: files passed Nov 4 23:55:00.567844 ignition[1933]: INFO : Ignition finished successfully Nov 4 23:55:00.553311 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 4 23:55:00.561128 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 4 23:55:00.564213 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 4 23:55:00.579188 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 4 23:55:00.579275 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Nov 4 23:55:00.601284 initrd-setup-root-after-ignition[1965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:55:00.604919 initrd-setup-root-after-ignition[1969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:55:00.607236 initrd-setup-root-after-ignition[1965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:55:00.609725 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 23:55:00.615558 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 4 23:55:00.619402 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 4 23:55:00.653634 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 4 23:55:00.653727 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 4 23:55:00.658364 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 4 23:55:00.663170 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 4 23:55:00.668740 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 4 23:55:00.669364 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 4 23:55:00.687437 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 23:55:00.692192 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 4 23:55:00.718867 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 23:55:00.719056 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 4 23:55:00.720237 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 23:55:00.726753 systemd[1]: Stopped target timers.target - Timer Units. Nov 4 23:55:00.728163 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 4 23:55:00.728288 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 23:55:00.732538 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 4 23:55:00.736262 systemd[1]: Stopped target basic.target - Basic System. Nov 4 23:55:00.740661 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 4 23:55:00.745040 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 23:55:00.745856 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 4 23:55:00.753228 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 4 23:55:00.754834 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 4 23:55:00.759500 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 23:55:00.759952 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 4 23:55:00.766261 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 4 23:55:00.770240 systemd[1]: Stopped target swap.target - Swaps. Nov 4 23:55:00.774209 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 4 23:55:00.774355 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 4 23:55:00.778520 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:55:00.778907 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 4 23:55:00.779244 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 4 23:55:00.780364 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:55:00.786695 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 4 23:55:00.786822 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 4 23:55:00.808368 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 4 23:55:00.808527 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 23:55:00.813242 systemd[1]: ignition-files.service: Deactivated successfully. Nov 4 23:55:00.813336 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 4 23:55:00.816324 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 4 23:55:00.816447 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 4 23:55:00.822999 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 4 23:55:00.829740 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 4 23:55:00.829876 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:55:00.838215 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 4 23:55:00.840902 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 4 23:55:00.841076 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:55:00.848302 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 4 23:55:00.863155 ignition[1989]: INFO : Ignition 2.22.0 Nov 4 23:55:00.863155 ignition[1989]: INFO : Stage: umount Nov 4 23:55:00.863155 ignition[1989]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:55:00.863155 ignition[1989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 4 23:55:00.863155 ignition[1989]: INFO : umount: umount passed Nov 4 23:55:00.863155 ignition[1989]: INFO : Ignition finished successfully Nov 4 23:55:00.848461 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:55:00.853968 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 4 23:55:00.854077 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 23:55:00.879998 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 4 23:55:00.880124 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 4 23:55:00.888390 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 4 23:55:00.890459 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 4 23:55:00.894751 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 4 23:55:00.896646 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 4 23:55:00.899967 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 4 23:55:00.900021 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 4 23:55:00.904164 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 4 23:55:00.904205 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 4 23:55:00.908171 systemd[1]: Stopped target network.target - Network. Nov 4 23:55:00.912138 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 4 23:55:00.912187 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 4 23:55:00.915134 systemd[1]: Stopped target paths.target - Path Units. Nov 4 23:55:00.918148 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 4 23:55:00.921085 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:55:00.924133 systemd[1]: Stopped target slices.target - Slice Units. Nov 4 23:55:00.928143 systemd[1]: Stopped target sockets.target - Socket Units. Nov 4 23:55:00.928412 systemd[1]: iscsid.socket: Deactivated successfully. Nov 4 23:55:00.928450 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 23:55:00.928727 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 4 23:55:00.928753 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 23:55:00.928791 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 4 23:55:00.928835 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 4 23:55:00.929066 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 4 23:55:00.929115 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 4 23:55:00.929466 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 4 23:55:00.929741 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 4 23:55:00.940630 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 4 23:55:00.940722 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 4 23:55:00.945147 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 4 23:55:00.945230 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 4 23:55:00.952985 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 4 23:55:00.956605 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 4 23:55:00.957807 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:55:00.961613 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 4 23:55:00.961709 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 4 23:55:00.961763 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 23:55:00.962060 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 23:55:00.962106 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:55:00.962394 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 4 23:55:00.962422 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 4 23:55:00.962468 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:55:00.967910 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 4 23:55:00.979795 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 4 23:55:00.979920 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 4 23:55:00.982638 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 4 23:55:00.982677 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 4 23:55:00.985717 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 4 23:55:00.985863 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:55:00.997372 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 4 23:55:00.997411 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Nov 4 23:55:01.002157 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 4 23:55:01.002183 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:55:01.070244 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fcbf28 eth0: Data path switched from VF: enP30832s1 Nov 4 23:55:01.070436 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 4 23:55:01.002374 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 4 23:55:01.002408 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 4 23:55:01.002740 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 4 23:55:01.002773 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 4 23:55:01.003077 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 4 23:55:01.003122 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 23:55:01.005203 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 4 23:55:01.005311 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 4 23:55:01.005361 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:55:01.005412 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 4 23:55:01.005439 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:55:01.005971 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:55:01.006002 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:55:01.025802 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 4 23:55:01.025894 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 4 23:55:01.072772 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 4 23:55:01.072851 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 4 23:55:01.076750 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 4 23:55:01.080321 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 4 23:55:01.113027 systemd[1]: Switching root. Nov 4 23:55:01.214988 systemd-journald[1121]: Journal stopped Nov 4 23:55:09.253544 systemd-journald[1121]: Received SIGTERM from PID 1 (systemd). Nov 4 23:55:09.253578 kernel: SELinux: policy capability network_peer_controls=1 Nov 4 23:55:09.253595 kernel: SELinux: policy capability open_perms=1 Nov 4 23:55:09.253605 kernel: SELinux: policy capability extended_socket_class=1 Nov 4 23:55:09.253614 kernel: SELinux: policy capability always_check_network=0 Nov 4 23:55:09.253623 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 4 23:55:09.253633 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 4 23:55:09.253645 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 4 23:55:09.253659 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 4 23:55:09.253671 kernel: SELinux: policy capability userspace_initial_context=0 Nov 4 23:55:09.253682 kernel: audit: type=1403 audit(1762300502.926:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 4 23:55:09.253694 systemd[1]: Successfully loaded SELinux policy in 239.068ms. Nov 4 23:55:09.253708 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.532ms. 
Nov 4 23:55:09.253726 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 23:55:09.253739 systemd[1]: Detected virtualization microsoft. Nov 4 23:55:09.253752 systemd[1]: Detected architecture x86-64. Nov 4 23:55:09.253763 systemd[1]: Detected first boot. Nov 4 23:55:09.253781 systemd[1]: Hostname set to . Nov 4 23:55:09.253795 systemd[1]: Initializing machine ID from random generator. Nov 4 23:55:09.253809 zram_generator::config[2031]: No configuration found. Nov 4 23:55:09.253823 kernel: Guest personality initialized and is inactive Nov 4 23:55:09.253836 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Nov 4 23:55:09.253850 kernel: Initialized host personality Nov 4 23:55:09.253864 kernel: NET: Registered PF_VSOCK protocol family Nov 4 23:55:09.253876 systemd[1]: Populated /etc with preset unit settings. Nov 4 23:55:09.253890 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 4 23:55:09.253904 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 4 23:55:09.253919 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 4 23:55:09.253933 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 4 23:55:09.253950 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 4 23:55:09.253962 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 4 23:55:09.253973 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 4 23:55:09.253989 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 4 23:55:09.254002 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 4 23:55:09.254013 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 4 23:55:09.254029 systemd[1]: Created slice user.slice - User and Session Slice. Nov 4 23:55:09.254042 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:55:09.254058 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:55:09.254073 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 4 23:55:09.254124 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 4 23:55:09.254147 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 4 23:55:09.254164 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 23:55:09.254176 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 4 23:55:09.254187 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 23:55:09.254198 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:55:09.254215 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 4 23:55:09.254225 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 4 23:55:09.254236 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Nov 4 23:55:09.254246 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 4 23:55:09.254257 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 23:55:09.254267 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 23:55:09.254277 systemd[1]: Reached target slices.target - Slice Units. Nov 4 23:55:09.254288 systemd[1]: Reached target swap.target - Swaps. Nov 4 23:55:09.254298 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 4 23:55:09.254310 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 4 23:55:09.254321 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 4 23:55:09.254332 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:55:09.254343 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 23:55:09.254356 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:55:09.254367 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 4 23:55:09.254378 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 4 23:55:09.254388 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 4 23:55:09.254399 systemd[1]: Mounting media.mount - External Media Directory... Nov 4 23:55:09.254410 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:55:09.254422 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 4 23:55:09.254433 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 4 23:55:09.254444 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 4 23:55:09.254455 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 4 23:55:09.254466 systemd[1]: Reached target machines.target - Containers. Nov 4 23:55:09.254477 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 4 23:55:09.254487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:55:09.254499 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 23:55:09.254509 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 4 23:55:09.254520 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:55:09.254531 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 23:55:09.254542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:55:09.254552 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 4 23:55:09.254565 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 23:55:09.254577 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 4 23:55:09.254588 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 4 23:55:09.254599 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 4 23:55:09.254609 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Nov 4 23:55:09.254620 systemd[1]: Stopped systemd-fsck-usr.service. Nov 4 23:55:09.254631 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:55:09.254644 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 23:55:09.254655 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 23:55:09.254666 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 23:55:09.254677 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 4 23:55:09.254688 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 4 23:55:09.254699 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 23:55:09.254710 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:55:09.254723 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 4 23:55:09.254733 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 4 23:55:09.254744 systemd[1]: Mounted media.mount - External Media Directory. Nov 4 23:55:09.254754 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 4 23:55:09.254765 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 4 23:55:09.254775 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 4 23:55:09.254786 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 4 23:55:09.254799 kernel: fuse: init (API version 7.41) Nov 4 23:55:09.254825 systemd-journald[2117]: Collecting audit messages is disabled. Nov 4 23:55:09.254848 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:55:09.254861 systemd-journald[2117]: Journal started Nov 4 23:55:09.254885 systemd-journald[2117]: Runtime Journal (/run/log/journal/0044222df57a4c0194d2f29874026675) is 8M, max 158.6M, 150.6M free. Nov 4 23:55:08.716978 systemd[1]: Queued start job for default target multi-user.target. Nov 4 23:55:08.724631 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 4 23:55:08.725080 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 4 23:55:09.264112 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 23:55:09.261808 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 4 23:55:09.261996 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 4 23:55:09.265568 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:55:09.265730 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:55:09.267823 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:55:09.267968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:55:09.269947 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 4 23:55:09.270115 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 4 23:55:09.272062 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:55:09.272231 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Nov 4 23:55:09.274145 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:55:09.278031 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 4 23:55:09.285239 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 23:55:09.287329 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 4 23:55:09.289896 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 4 23:55:09.296193 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 4 23:55:09.298387 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 4 23:55:09.299188 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 23:55:09.303348 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 4 23:55:09.306286 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:55:09.312218 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 4 23:55:09.315287 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 4 23:55:09.317380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:55:09.319077 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 4 23:55:09.323318 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 23:55:09.325238 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 4 23:55:09.330205 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 4 23:55:09.339184 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 4 23:55:09.343009 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 4 23:55:09.345790 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 4 23:55:09.371725 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:55:09.378912 systemd-journald[2117]: Time spent on flushing to /var/log/journal/0044222df57a4c0194d2f29874026675 is 19.587ms for 968 entries. Nov 4 23:55:09.378912 systemd-journald[2117]: System Journal (/var/log/journal/0044222df57a4c0194d2f29874026675) is 8M, max 2.2G, 2.2G free. Nov 4 23:55:09.458016 systemd-journald[2117]: Received client request to flush runtime journal. Nov 4 23:55:09.458064 kernel: ACPI: bus type drm_connector registered Nov 4 23:55:09.458082 kernel: loop1: detected capacity change from 0 to 27752 Nov 4 23:55:09.397148 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 23:55:09.397297 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 23:55:09.421713 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 4 23:55:09.423511 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 4 23:55:09.428498 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 4 23:55:09.431496 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Nov 4 23:55:09.441015 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:55:09.459278 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 4 23:55:09.547390 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 4 23:55:09.580360 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:55:09.726522 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 4 23:55:09.957559 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 4 23:55:09.964528 kernel: loop2: detected capacity change from 0 to 219144 Nov 4 23:55:09.964219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 23:55:09.968231 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 23:55:10.035118 kernel: loop3: detected capacity change from 0 to 128048 Nov 4 23:55:10.073249 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 4 23:55:10.109367 systemd-tmpfiles[2189]: ACLs are not supported, ignoring. Nov 4 23:55:10.109386 systemd-tmpfiles[2189]: ACLs are not supported, ignoring. Nov 4 23:55:10.112207 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:55:10.122655 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 4 23:55:10.135452 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 4 23:55:10.138554 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:55:10.169872 systemd-udevd[2200]: Using default interface naming scheme 'v257'. Nov 4 23:55:10.222973 systemd-resolved[2188]: Positive Trust Anchors: Nov 4 23:55:10.222992 systemd-resolved[2188]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 23:55:10.222996 systemd-resolved[2188]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 23:55:10.223030 systemd-resolved[2188]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 23:55:10.389938 systemd-resolved[2188]: Using system hostname 'ci-4487.0.0-n-fda2ba6bd5'. Nov 4 23:55:10.391025 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 23:55:10.393144 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 23:55:10.551112 kernel: loop4: detected capacity change from 0 to 110984 Nov 4 23:55:10.778369 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:55:10.783248 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 23:55:10.864332 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Nov 4 23:55:10.896109 kernel: mousedev: PS/2 mouse device common for all mice Nov 4 23:55:10.912235 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 4 23:55:10.944225 kernel: hv_vmbus: registering driver hv_balloon Nov 4 23:55:10.961110 kernel: hv_vmbus: registering driver hyperv_fb Nov 4 23:55:10.961825 systemd-networkd[2207]: lo: Link UP Nov 4 23:55:10.961836 systemd-networkd[2207]: lo: Gained carrier Nov 4 23:55:10.964254 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 23:55:10.970454 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 4 23:55:10.965701 systemd-networkd[2207]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:55:10.965705 systemd-networkd[2207]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 23:55:10.966524 systemd[1]: Reached target network.target - Network. Nov 4 23:55:10.971325 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 4 23:55:10.974359 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 4 23:55:10.977356 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 4 23:55:10.980157 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 4 23:55:10.980248 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fcbf28 eth0: Data path switched to VF: enP30832s1 Nov 4 23:55:10.985693 systemd-networkd[2207]: enP30832s1: Link UP Nov 4 23:55:10.985850 systemd-networkd[2207]: eth0: Link UP Nov 4 23:55:10.985858 systemd-networkd[2207]: eth0: Gained carrier Nov 4 23:55:10.985874 systemd-networkd[2207]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:55:10.990800 systemd-networkd[2207]: enP30832s1: Gained carrier Nov 4 23:55:10.999379 systemd-networkd[2207]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 4 23:55:11.008681 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 4 23:55:11.008809 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 4 23:55:11.011414 kernel: Console: switching to colour dummy device 80x25 Nov 4 23:55:11.014115 kernel: Console: switching to colour frame buffer device 128x48 Nov 4 23:55:11.062508 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 4 23:55:11.087589 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:55:11.095107 kernel: loop5: detected capacity change from 0 to 27752 Nov 4 23:55:11.097302 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:55:11.097473 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:55:11.104288 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:55:11.121147 kernel: loop6: detected capacity change from 0 to 219144 Nov 4 23:55:11.138464 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:55:11.139011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:55:11.144334 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 4 23:55:11.162957 kernel: loop7: detected capacity change from 0 to 128048 Nov 4 23:55:11.181104 kernel: loop1: detected capacity change from 0 to 110984 Nov 4 23:55:11.203601 (sd-merge)[2268]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Nov 4 23:55:11.215397 (sd-merge)[2268]: Merged extensions into '/usr'. Nov 4 23:55:11.250072 systemd[1]: Reload requested from client PID 2168 ('systemd-sysext') (unit systemd-sysext.service)... Nov 4 23:55:11.250330 systemd[1]: Reloading... Nov 4 23:55:11.281125 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Nov 4 23:55:11.305144 zram_generator::config[2314]: No configuration found. Nov 4 23:55:11.530491 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 4 23:55:11.531852 systemd[1]: Reloading finished in 281 ms. Nov 4 23:55:11.551452 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 4 23:55:11.570890 systemd[1]: Starting ensure-sysext.service... Nov 4 23:55:11.575214 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 4 23:55:11.583936 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 23:55:11.595933 systemd[1]: Reload requested from client PID 2376 ('systemctl') (unit ensure-sysext.service)... Nov 4 23:55:11.596002 systemd[1]: Reloading... Nov 4 23:55:11.604535 systemd-tmpfiles[2378]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 4 23:55:11.604796 systemd-tmpfiles[2378]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 4 23:55:11.605410 systemd-tmpfiles[2378]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 4 23:55:11.605708 systemd-tmpfiles[2378]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 4 23:55:11.606807 systemd-tmpfiles[2378]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 4 23:55:11.607385 systemd-tmpfiles[2378]: ACLs are not supported, ignoring. Nov 4 23:55:11.607510 systemd-tmpfiles[2378]: ACLs are not supported, ignoring. Nov 4 23:55:11.655120 zram_generator::config[2409]: No configuration found. Nov 4 23:55:11.691730 systemd-tmpfiles[2378]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 23:55:11.691740 systemd-tmpfiles[2378]: Skipping /boot Nov 4 23:55:11.698231 systemd-tmpfiles[2378]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 23:55:11.698325 systemd-tmpfiles[2378]: Skipping /boot Nov 4 23:55:11.843756 systemd[1]: Reloading finished in 247 ms. Nov 4 23:55:11.872040 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 4 23:55:11.875021 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:55:11.883167 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:55:11.887310 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 4 23:55:11.892220 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 4 23:55:11.897883 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Nov 4 23:55:11.901315 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 4 23:55:11.904170 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:55:11.904321 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:55:11.906358 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:55:11.908607 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:55:11.912973 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 23:55:11.913579 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:55:11.913681 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:55:11.913771 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:55:11.918013 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:55:11.918238 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:55:11.918377 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:55:11.918455 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:55:11.918534 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:55:11.924309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:55:11.924463 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:55:11.926149 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:55:11.926504 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:55:11.930355 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 23:55:11.930933 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:55:11.931036 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:55:11.931205 systemd[1]: Reached target time-set.target - System Time Set. Nov 4 23:55:11.931303 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:55:11.932654 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Nov 4 23:55:11.932852 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:55:11.934431 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 23:55:11.937551 systemd[1]: Finished ensure-sysext.service. Nov 4 23:55:11.941294 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:55:11.941456 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:55:11.942083 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:55:11.946477 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 4 23:55:11.957253 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 23:55:11.957441 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 23:55:12.211130 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 4 23:55:12.330799 augenrules[2510]: No rules Nov 4 23:55:12.332034 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:55:12.332314 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:55:12.506718 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:55:12.531229 systemd-networkd[2207]: eth0: Gained IPv6LL Nov 4 23:55:12.533204 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 4 23:55:12.535316 systemd[1]: Reached target network-online.target - Network is Online. Nov 4 23:55:14.411222 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 4 23:55:14.413626 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 4 23:55:20.124115 ldconfig[2476]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 4 23:55:20.135084 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 4 23:55:20.139457 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 4 23:55:20.169580 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 4 23:55:20.171435 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 23:55:20.174246 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 4 23:55:20.175867 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 4 23:55:20.179149 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 4 23:55:20.182271 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 4 23:55:20.185209 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 4 23:55:20.188168 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 4 23:55:20.191140 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 4 23:55:20.191177 systemd[1]: Reached target paths.target - Path Units. Nov 4 23:55:20.194148 systemd[1]: Reached target timers.target - Timer Units. 
Nov 4 23:55:20.211386 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 4 23:55:20.213938 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 4 23:55:20.233178 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 4 23:55:20.234822 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 4 23:55:20.236332 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 4 23:55:20.249594 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 4 23:55:20.266507 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 4 23:55:20.269706 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 4 23:55:20.272926 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 23:55:20.275140 systemd[1]: Reached target basic.target - Basic System. Nov 4 23:55:20.278194 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 4 23:55:20.278225 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 4 23:55:20.353561 systemd[1]: Starting chronyd.service - NTP client/server... Nov 4 23:55:20.359159 systemd[1]: Starting containerd.service - containerd container runtime... Nov 4 23:55:20.364288 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 4 23:55:20.369617 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 4 23:55:20.374250 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 4 23:55:20.379179 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 4 23:55:20.385284 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 4 23:55:20.389413 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 4 23:55:20.395219 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 4 23:55:20.397157 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Nov 4 23:55:20.399221 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 4 23:55:20.401260 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 4 23:55:20.404565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:55:20.413267 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 4 23:55:20.415981 jq[2531]: false Nov 4 23:55:20.416557 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 4 23:55:20.421279 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 4 23:55:20.425253 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 4 23:55:20.431261 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 4 23:55:20.434446 KVP[2537]: KVP starting; pid is:2537 Nov 4 23:55:20.440390 systemd[1]: Starting systemd-logind.service - User Login Management... 
Nov 4 23:55:20.442472 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 4 23:55:20.448751 KVP[2537]: KVP LIC Version: 3.1 Nov 4 23:55:20.442884 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 4 23:55:20.445273 systemd[1]: Starting update-engine.service - Update Engine... Nov 4 23:55:20.452630 kernel: hv_utils: KVP IC version 4.0 Nov 4 23:55:20.450959 oslogin_cache_refresh[2533]: Refreshing passwd entry cache Nov 4 23:55:20.452859 google_oslogin_nss_cache[2533]: oslogin_cache_refresh[2533]: Refreshing passwd entry cache Nov 4 23:55:20.453333 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 4 23:55:20.459156 extend-filesystems[2532]: Found /dev/nvme0n1p6 Nov 4 23:55:20.467044 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 4 23:55:20.473819 google_oslogin_nss_cache[2533]: oslogin_cache_refresh[2533]: Failure getting users, quitting Nov 4 23:55:20.473819 google_oslogin_nss_cache[2533]: oslogin_cache_refresh[2533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 23:55:20.473819 google_oslogin_nss_cache[2533]: oslogin_cache_refresh[2533]: Refreshing group entry cache Nov 4 23:55:20.472680 oslogin_cache_refresh[2533]: Failure getting users, quitting Nov 4 23:55:20.472696 oslogin_cache_refresh[2533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 23:55:20.472737 oslogin_cache_refresh[2533]: Refreshing group entry cache Nov 4 23:55:20.473641 chronyd[2526]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Nov 4 23:55:20.478729 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 4 23:55:20.479069 jq[2549]: true Nov 4 23:55:20.479382 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 4 23:55:20.481272 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 4 23:55:20.481595 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 4 23:55:20.494723 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 4 23:55:20.496318 google_oslogin_nss_cache[2533]: oslogin_cache_refresh[2533]: Failure getting groups, quitting Nov 4 23:55:20.496318 google_oslogin_nss_cache[2533]: oslogin_cache_refresh[2533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 23:55:20.493394 oslogin_cache_refresh[2533]: Failure getting groups, quitting Nov 4 23:55:20.494912 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 4 23:55:20.493403 oslogin_cache_refresh[2533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 23:55:20.501964 extend-filesystems[2532]: Found /dev/nvme0n1p9 Nov 4 23:55:20.505420 extend-filesystems[2532]: Checking size of /dev/nvme0n1p9 Nov 4 23:55:20.517385 (ntainerd)[2578]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 4 23:55:20.520882 systemd[1]: motdgen.service: Deactivated successfully. Nov 4 23:55:20.521071 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Nov 4 23:55:20.526865 jq[2560]: true Nov 4 23:55:20.539393 update_engine[2548]: I20251104 23:55:20.539259 2548 main.cc:92] Flatcar Update Engine starting Nov 4 23:55:20.541214 chronyd[2526]: Timezone right/UTC failed leap second check, ignoring Nov 4 23:55:20.541469 systemd[1]: Started chronyd.service - NTP client/server. Nov 4 23:55:20.541365 chronyd[2526]: Loaded seccomp filter (level 2) Nov 4 23:55:20.544109 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 4 23:55:20.560737 extend-filesystems[2532]: Resized partition /dev/nvme0n1p9 Nov 4 23:55:20.585444 tar[2559]: linux-amd64/LICENSE Nov 4 23:55:20.585634 tar[2559]: linux-amd64/helm Nov 4 23:55:20.594765 extend-filesystems[2607]: resize2fs 1.47.3 (8-Jul-2025) Nov 4 23:55:20.656658 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Nov 4 23:55:20.689694 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Nov 4 23:55:20.666321 systemd-logind[2547]: New seat seat0. Nov 4 23:55:20.691732 systemd-logind[2547]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Nov 4 23:55:20.691980 systemd[1]: Started systemd-logind.service - User Login Management. Nov 4 23:55:20.703317 extend-filesystems[2607]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 4 23:55:20.703317 extend-filesystems[2607]: old_desc_blocks = 4, new_desc_blocks = 4 Nov 4 23:55:20.703317 extend-filesystems[2607]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Nov 4 23:55:20.715231 extend-filesystems[2532]: Resized filesystem in /dev/nvme0n1p9 Nov 4 23:55:20.714936 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 4 23:55:20.715703 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 4 23:55:20.717394 bash[2610]: Updated "/home/core/.ssh/authorized_keys" Nov 4 23:55:20.721251 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 4 23:55:20.727429 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 4 23:55:20.777624 sshd_keygen[2589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 23:55:20.816491 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 4 23:55:20.820375 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 23:55:20.824244 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 4 23:55:20.851683 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 23:55:20.851866 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 4 23:55:20.859833 dbus-daemon[2529]: [system] SELinux support is enabled Nov 4 23:55:20.864357 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 4 23:55:20.868434 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 4 23:55:20.873783 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 4 23:55:20.873807 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 4 23:55:20.876164 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
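[editor's note] The extend-filesystems entries above grow the root ext4 filesystem online, from 6359552 to 6376955 4 KiB blocks (roughly 68 MiB gained, about 24.3 GiB total), using resize2fs while the filesystem stays mounted. A rough sketch of the same operation done by hand, assuming the underlying partition has already been enlarged; the device path is taken from the log.

```python
import subprocess

DEV = "/dev/nvme0n1p9"   # root partition named in the log
BLOCK = 4096             # ext4 block size the kernel reports

old_blocks, new_blocks = 6359552, 6376955
print(f"growing by {(new_blocks - old_blocks) * BLOCK / 2**20:.1f} MiB "
      f"to {new_blocks * BLOCK / 2**30:.1f} GiB")

# resize2fs with no size argument grows a mounted ext4 filesystem online
# until it fills the partition.
subprocess.run(["resize2fs", DEV], check=True)
```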
Nov 4 23:55:20.876187 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 4 23:55:20.891744 systemd[1]: Started update-engine.service - Update Engine. Nov 4 23:55:20.894428 update_engine[2548]: I20251104 23:55:20.894380 2548 update_check_scheduler.cc:74] Next update check in 4m3s Nov 4 23:55:20.894738 dbus-daemon[2529]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 4 23:55:20.900973 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 4 23:55:20.917249 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 4 23:55:20.920868 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 4 23:55:20.928915 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 23:55:20.940371 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 4 23:55:20.943350 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 23:55:20.971959 coreos-metadata[2528]: Nov 04 23:55:20.971 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 4 23:55:20.977816 coreos-metadata[2528]: Nov 04 23:55:20.977 INFO Fetch successful Nov 4 23:55:20.977816 coreos-metadata[2528]: Nov 04 23:55:20.977 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 4 23:55:20.983069 coreos-metadata[2528]: Nov 04 23:55:20.981 INFO Fetch successful Nov 4 23:55:20.983069 coreos-metadata[2528]: Nov 04 23:55:20.983 INFO Fetching http://168.63.129.16/machine/ebe5b80c-cfec-4c30-bf26-d0be6be7d2ff/c9c6f7c6%2D8e1e%2D4492%2Da6d2%2Dddc86a27c684.%5Fci%2D4487.0.0%2Dn%2Dfda2ba6bd5?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 4 23:55:20.985171 coreos-metadata[2528]: Nov 04 23:55:20.985 INFO Fetch successful Nov 4 23:55:20.985389 coreos-metadata[2528]: Nov 04 23:55:20.985 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 4 23:55:20.993590 coreos-metadata[2528]: Nov 04 23:55:20.993 INFO Fetch successful Nov 4 23:55:21.026521 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 4 23:55:21.029554 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 4 23:55:21.210070 locksmithd[2667]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 4 23:55:21.268162 tar[2559]: linux-amd64/README.md Nov 4 23:55:21.283265 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
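[editor's note] coreos-metadata above pulls instance data from two places: the Azure WireServer at 168.63.129.16 and the Instance Metadata Service at 169.254.169.254. A minimal sketch of the IMDS call it logs for vmSize, assuming the documented requirements that IMDS requests carry a `Metadata: true` header and bypass any proxy; the URL is copied from the log.

```python
import urllib.request

# Same endpoint the metadata agent fetches above; IMDS only answers
# link-local requests that carry the Metadata header and skip proxies.
URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))  # no proxy
with opener.open(req, timeout=5) as resp:
    print("vmSize:", resp.read().decode())
```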
Nov 4 23:55:21.649266 containerd[2578]: time="2025-11-04T23:55:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 4 23:55:21.651323 containerd[2578]: time="2025-11-04T23:55:21.649634727Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 4 23:55:21.658860 containerd[2578]: time="2025-11-04T23:55:21.658383587Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.702µs" Nov 4 23:55:21.658860 containerd[2578]: time="2025-11-04T23:55:21.658415152Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 4 23:55:21.658860 containerd[2578]: time="2025-11-04T23:55:21.658432745Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 4 23:55:21.658860 containerd[2578]: time="2025-11-04T23:55:21.658556979Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 23:55:21.658860 containerd[2578]: time="2025-11-04T23:55:21.658569812Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 23:55:21.658860 containerd[2578]: time="2025-11-04T23:55:21.658596691Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 23:55:21.658860 containerd[2578]: time="2025-11-04T23:55:21.658644113Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 23:55:21.658860 containerd[2578]: time="2025-11-04T23:55:21.658653945Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 23:55:21.658860 containerd[2578]: time="2025-11-04T23:55:21.658845903Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 23:55:21.658860 containerd[2578]: time="2025-11-04T23:55:21.658858890Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 23:55:21.658860 containerd[2578]: time="2025-11-04T23:55:21.658867874Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 23:55:21.659197 containerd[2578]: time="2025-11-04T23:55:21.658875137Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 23:55:21.659197 containerd[2578]: time="2025-11-04T23:55:21.658925446Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 4 23:55:21.659197 containerd[2578]: time="2025-11-04T23:55:21.659115187Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 23:55:21.659197 containerd[2578]: time="2025-11-04T23:55:21.659139402Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Nov 4 23:55:21.659197 containerd[2578]: time="2025-11-04T23:55:21.659150725Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 23:55:21.659197 containerd[2578]: time="2025-11-04T23:55:21.659189733Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 23:55:21.659426 containerd[2578]: time="2025-11-04T23:55:21.659403130Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 23:55:21.659492 containerd[2578]: time="2025-11-04T23:55:21.659457839Z" level=info msg="metadata content store policy set" policy=shared Nov 4 23:55:21.674510 containerd[2578]: time="2025-11-04T23:55:21.673470533Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 4 23:55:21.674510 containerd[2578]: time="2025-11-04T23:55:21.673534992Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 4 23:55:21.674510 containerd[2578]: time="2025-11-04T23:55:21.673551712Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 4 23:55:21.674510 containerd[2578]: time="2025-11-04T23:55:21.673563868Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 4 23:55:21.674510 containerd[2578]: time="2025-11-04T23:55:21.673577070Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 4 23:55:21.674510 containerd[2578]: time="2025-11-04T23:55:21.673588902Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 4 23:55:21.674510 containerd[2578]: time="2025-11-04T23:55:21.673601832Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 4 23:55:21.674510 containerd[2578]: time="2025-11-04T23:55:21.673616893Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 4 23:55:21.674510 containerd[2578]: time="2025-11-04T23:55:21.673635983Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 4 23:55:21.674510 containerd[2578]: time="2025-11-04T23:55:21.673647063Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 4 23:55:21.674510 containerd[2578]: time="2025-11-04T23:55:21.673656714Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 4 23:55:21.674510 containerd[2578]: time="2025-11-04T23:55:21.673669661Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 4 23:55:21.674510 containerd[2578]: time="2025-11-04T23:55:21.673776189Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 4 23:55:21.674510 containerd[2578]: time="2025-11-04T23:55:21.673791490Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 4 23:55:21.674834 containerd[2578]: time="2025-11-04T23:55:21.673805808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 4 23:55:21.674834 containerd[2578]: time="2025-11-04T23:55:21.673821608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Nov 4 23:55:21.674834 containerd[2578]: time="2025-11-04T23:55:21.673833199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 4 23:55:21.674834 containerd[2578]: time="2025-11-04T23:55:21.673843740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 4 23:55:21.674834 containerd[2578]: time="2025-11-04T23:55:21.673854015Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 4 23:55:21.674834 containerd[2578]: time="2025-11-04T23:55:21.673863991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 4 23:55:21.674834 containerd[2578]: time="2025-11-04T23:55:21.673874466Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 4 23:55:21.674834 containerd[2578]: time="2025-11-04T23:55:21.673884598Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 4 23:55:21.674834 containerd[2578]: time="2025-11-04T23:55:21.673893706Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 4 23:55:21.674834 containerd[2578]: time="2025-11-04T23:55:21.673949426Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 4 23:55:21.674834 containerd[2578]: time="2025-11-04T23:55:21.673961655Z" level=info msg="Start snapshots syncer" Nov 4 23:55:21.674834 containerd[2578]: time="2025-11-04T23:55:21.673978192Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 4 23:55:21.675064 containerd[2578]: time="2025-11-04T23:55:21.674217863Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 4 
23:55:21.675064 containerd[2578]: time="2025-11-04T23:55:21.674264580Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 4 23:55:21.675348 containerd[2578]: time="2025-11-04T23:55:21.674314503Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 23:55:21.675348 containerd[2578]: time="2025-11-04T23:55:21.674394928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 23:55:21.675348 containerd[2578]: time="2025-11-04T23:55:21.674412430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 23:55:21.675348 containerd[2578]: time="2025-11-04T23:55:21.674422982Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 23:55:21.675348 containerd[2578]: time="2025-11-04T23:55:21.674434819Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 23:55:21.675348 containerd[2578]: time="2025-11-04T23:55:21.674446499Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 23:55:21.675348 containerd[2578]: time="2025-11-04T23:55:21.674457157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 23:55:21.675348 containerd[2578]: time="2025-11-04T23:55:21.674468121Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 4 23:55:21.675348 containerd[2578]: time="2025-11-04T23:55:21.674490615Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 23:55:21.677035 containerd[2578]: time="2025-11-04T23:55:21.675974694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 4 23:55:21.677035 containerd[2578]: time="2025-11-04T23:55:21.676006491Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 23:55:21.677035 containerd[2578]: time="2025-11-04T23:55:21.676076536Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:55:21.677035 containerd[2578]: time="2025-11-04T23:55:21.676111247Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:55:21.677035 containerd[2578]: time="2025-11-04T23:55:21.676121310Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:55:21.677035 containerd[2578]: time="2025-11-04T23:55:21.676130563Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:55:21.677035 containerd[2578]: time="2025-11-04T23:55:21.676139029Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 23:55:21.677035 containerd[2578]: time="2025-11-04T23:55:21.676148736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 23:55:21.677035 containerd[2578]: time="2025-11-04T23:55:21.676158713Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 23:55:21.677035 
containerd[2578]: time="2025-11-04T23:55:21.676186131Z" level=info msg="runtime interface created" Nov 4 23:55:21.677035 containerd[2578]: time="2025-11-04T23:55:21.676191673Z" level=info msg="created NRI interface" Nov 4 23:55:21.677035 containerd[2578]: time="2025-11-04T23:55:21.676199811Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 23:55:21.677035 containerd[2578]: time="2025-11-04T23:55:21.676212218Z" level=info msg="Connect containerd service" Nov 4 23:55:21.677035 containerd[2578]: time="2025-11-04T23:55:21.676241767Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 23:55:21.677382 containerd[2578]: time="2025-11-04T23:55:21.677143063Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 23:55:21.817204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:55:21.827597 (kubelet)[2705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:55:22.076055 containerd[2578]: time="2025-11-04T23:55:22.075898998Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 23:55:22.076055 containerd[2578]: time="2025-11-04T23:55:22.075952653Z" level=info msg="Start subscribing containerd event" Nov 4 23:55:22.076190 containerd[2578]: time="2025-11-04T23:55:22.076063640Z" level=info msg="Start recovering state" Nov 4 23:55:22.076190 containerd[2578]: time="2025-11-04T23:55:22.076174157Z" level=info msg="Start event monitor" Nov 4 23:55:22.076190 containerd[2578]: time="2025-11-04T23:55:22.076188521Z" level=info msg="Start cni network conf syncer for default" Nov 4 23:55:22.076247 containerd[2578]: time="2025-11-04T23:55:22.076196041Z" level=info msg="Start streaming server" Nov 4 23:55:22.076247 containerd[2578]: time="2025-11-04T23:55:22.076210683Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 23:55:22.076247 containerd[2578]: time="2025-11-04T23:55:22.076218925Z" level=info msg="runtime interface starting up..." Nov 4 23:55:22.076247 containerd[2578]: time="2025-11-04T23:55:22.076225238Z" level=info msg="starting plugins..." Nov 4 23:55:22.076247 containerd[2578]: time="2025-11-04T23:55:22.076237000Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 23:55:22.076434 containerd[2578]: time="2025-11-04T23:55:22.076415989Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 4 23:55:22.076584 systemd[1]: Started containerd.service - containerd container runtime. Nov 4 23:55:22.079484 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 23:55:22.080080 containerd[2578]: time="2025-11-04T23:55:22.079905787Z" level=info msg="containerd successfully booted in 0.431501s" Nov 4 23:55:22.081658 systemd[1]: Startup finished in 4.337s (kernel) + 13.695s (initrd) + 19.392s (userspace) = 37.425s. 
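[editor's note] The only error in the containerd startup above is the CRI plugin finding no CNI configuration in /etc/cni/net.d; on a Kubernetes node that gap is normally filled later by whatever CNI add-on the cluster installs, and the cni conf syncer picks the file up automatically. Purely to illustrate the file format the syncer is waiting for, a sketch that writes a minimal bridge conflist; the network name, bridge name and subnet below are made-up example values, not what this node ends up using.

```python
import json
import pathlib

# Hypothetical values; a real cluster's CNI add-on writes its own config here.
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-bridge-net",
    "plugins": [{
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": True,
        "ipMasq": True,
        "ipam": {
            "type": "host-local",
            "ranges": [[{"subnet": "10.85.0.0/16"}]],
            "routes": [{"dst": "0.0.0.0/0"}],
        },
    }],
}

path = pathlib.Path("/etc/cni/net.d/10-example-bridge.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conflist, indent=2))
print("wrote", path)
```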
Nov 4 23:55:22.323211 kubelet[2705]: E1104 23:55:22.323167 2705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:55:22.325991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:55:22.326257 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:55:22.326679 systemd[1]: kubelet.service: Consumed 854ms CPU time, 257.8M memory peak. Nov 4 23:55:22.589812 login[2671]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Nov 4 23:55:22.603085 login[2670]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 4 23:55:22.609278 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 4 23:55:22.611345 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 23:55:22.617559 systemd-logind[2547]: New session 1 of user core. Nov 4 23:55:22.641860 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 23:55:22.644393 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 4 23:55:22.671645 (systemd)[2722]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 23:55:22.675293 systemd-logind[2547]: New session c1 of user core. Nov 4 23:55:22.960390 systemd[2722]: Queued start job for default target default.target. Nov 4 23:55:22.966972 systemd[2722]: Created slice app.slice - User Application Slice. Nov 4 23:55:22.967428 systemd[2722]: Reached target paths.target - Paths. Nov 4 23:55:22.967601 systemd[2722]: Reached target timers.target - Timers. Nov 4 23:55:22.968753 systemd[2722]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 23:55:22.979060 systemd[2722]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 23:55:22.979557 systemd[2722]: Reached target sockets.target - Sockets. Nov 4 23:55:22.979752 systemd[2722]: Reached target basic.target - Basic System. Nov 4 23:55:22.979825 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 23:55:22.980681 systemd[2722]: Reached target default.target - Main User Target. Nov 4 23:55:22.980713 systemd[2722]: Startup finished in 299ms. Nov 4 23:55:22.984213 systemd[1]: Started session-1.scope - Session 1 of User core. 
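[editor's note] The kubelet exit at the top of this stretch is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until something like `kubeadm init` or `kubeadm join` writes it, so systemd keeps restarting the unit (the restart-counter entries further down). As a rough illustration only, a sketch of the kind of KubeletConfiguration stub that file contains once bootstrapping has run; every value below is a generic assumption, not what this node is configured with.

```python
import pathlib

# Generic example of the file the kubelet is looking for; normally written
# by kubeadm, not by hand. All values here are illustrative assumptions.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)
print("wrote", path)
```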
Nov 4 23:55:23.163885 waagent[2664]: 2025-11-04T23:55:23.163810Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Nov 4 23:55:23.174204 waagent[2664]: 2025-11-04T23:55:23.164681Z INFO Daemon Daemon OS: flatcar 4487.0.0 Nov 4 23:55:23.174204 waagent[2664]: 2025-11-04T23:55:23.165067Z INFO Daemon Daemon Python: 3.11.13 Nov 4 23:55:23.174204 waagent[2664]: 2025-11-04T23:55:23.165622Z INFO Daemon Daemon Run daemon Nov 4 23:55:23.174204 waagent[2664]: 2025-11-04T23:55:23.165965Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4487.0.0' Nov 4 23:55:23.174204 waagent[2664]: 2025-11-04T23:55:23.166663Z INFO Daemon Daemon Using waagent for provisioning Nov 4 23:55:23.174204 waagent[2664]: 2025-11-04T23:55:23.166839Z INFO Daemon Daemon Activate resource disk Nov 4 23:55:23.174204 waagent[2664]: 2025-11-04T23:55:23.167102Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 4 23:55:23.174204 waagent[2664]: 2025-11-04T23:55:23.168986Z INFO Daemon Daemon Found device: None Nov 4 23:55:23.174204 waagent[2664]: 2025-11-04T23:55:23.169404Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 4 23:55:23.174204 waagent[2664]: 2025-11-04T23:55:23.169474Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 4 23:55:23.174204 waagent[2664]: 2025-11-04T23:55:23.170239Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 4 23:55:23.174204 waagent[2664]: 2025-11-04T23:55:23.170343Z INFO Daemon Daemon Running default provisioning handler Nov 4 23:55:23.179900 waagent[2664]: 2025-11-04T23:55:23.179587Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Nov 4 23:55:23.180181 waagent[2664]: 2025-11-04T23:55:23.180152Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 4 23:55:23.180541 waagent[2664]: 2025-11-04T23:55:23.180520Z INFO Daemon Daemon cloud-init is enabled: False Nov 4 23:55:23.180793 waagent[2664]: 2025-11-04T23:55:23.180776Z INFO Daemon Daemon Copying ovf-env.xml Nov 4 23:55:23.290355 waagent[2664]: 2025-11-04T23:55:23.290266Z INFO Daemon Daemon Successfully mounted dvd Nov 4 23:55:23.314775 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 4 23:55:23.316710 waagent[2664]: 2025-11-04T23:55:23.316649Z INFO Daemon Daemon Detect protocol endpoint Nov 4 23:55:23.318258 waagent[2664]: 2025-11-04T23:55:23.318218Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 4 23:55:23.319919 waagent[2664]: 2025-11-04T23:55:23.319888Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Nov 4 23:55:23.321740 waagent[2664]: 2025-11-04T23:55:23.321709Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 4 23:55:23.323150 waagent[2664]: 2025-11-04T23:55:23.323119Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 4 23:55:23.324539 waagent[2664]: 2025-11-04T23:55:23.324466Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 4 23:55:23.347788 waagent[2664]: 2025-11-04T23:55:23.347750Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 4 23:55:23.350113 waagent[2664]: 2025-11-04T23:55:23.348141Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 4 23:55:23.350113 waagent[2664]: 2025-11-04T23:55:23.348215Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 4 23:55:23.421428 waagent[2664]: 2025-11-04T23:55:23.421365Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 4 23:55:23.423312 waagent[2664]: 2025-11-04T23:55:23.422786Z INFO Daemon Daemon Forcing an update of the goal state. Nov 4 23:55:23.431459 waagent[2664]: 2025-11-04T23:55:23.431423Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 4 23:55:23.453199 waagent[2664]: 2025-11-04T23:55:23.453172Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Nov 4 23:55:23.455552 waagent[2664]: 2025-11-04T23:55:23.454134Z INFO Daemon Nov 4 23:55:23.455552 waagent[2664]: 2025-11-04T23:55:23.454226Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 75e9bb03-f8b4-454b-854e-105692c99165 eTag: 6420669391545100391 source: Fabric] Nov 4 23:55:23.455552 waagent[2664]: 2025-11-04T23:55:23.454480Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Nov 4 23:55:23.455552 waagent[2664]: 2025-11-04T23:55:23.454774Z INFO Daemon Nov 4 23:55:23.455552 waagent[2664]: 2025-11-04T23:55:23.455013Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 4 23:55:23.463697 waagent[2664]: 2025-11-04T23:55:23.463667Z INFO Daemon Daemon Downloading artifacts profile blob Nov 4 23:55:23.541858 waagent[2664]: 2025-11-04T23:55:23.541777Z INFO Daemon Downloaded certificate {'thumbprint': 'A58E69D4E4886213BB68C28FA1248F1519D2E6E7', 'hasPrivateKey': True} Nov 4 23:55:23.545657 waagent[2664]: 2025-11-04T23:55:23.542665Z INFO Daemon Fetch goal state completed Nov 4 23:55:23.559757 waagent[2664]: 2025-11-04T23:55:23.559695Z INFO Daemon Daemon Starting provisioning Nov 4 23:55:23.562517 waagent[2664]: 2025-11-04T23:55:23.560309Z INFO Daemon Daemon Handle ovf-env.xml. Nov 4 23:55:23.562517 waagent[2664]: 2025-11-04T23:55:23.560872Z INFO Daemon Daemon Set hostname [ci-4487.0.0-n-fda2ba6bd5] Nov 4 23:55:23.575575 waagent[2664]: 2025-11-04T23:55:23.575535Z INFO Daemon Daemon Publish hostname [ci-4487.0.0-n-fda2ba6bd5] Nov 4 23:55:23.577378 waagent[2664]: 2025-11-04T23:55:23.576253Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 4 23:55:23.577378 waagent[2664]: 2025-11-04T23:55:23.576675Z INFO Daemon Daemon Primary interface is [eth0] Nov 4 23:55:23.591821 login[2671]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 4 23:55:23.596803 systemd-logind[2547]: New session 2 of user core. Nov 4 23:55:23.598365 systemd-networkd[2207]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:55:23.598373 systemd-networkd[2207]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. 
Nov 4 23:55:23.598427 systemd-networkd[2207]: eth0: DHCP lease lost Nov 4 23:55:23.605279 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 4 23:55:23.612195 waagent[2664]: 2025-11-04T23:55:23.612147Z INFO Daemon Daemon Create user account if not exists Nov 4 23:55:23.615662 waagent[2664]: 2025-11-04T23:55:23.614812Z INFO Daemon Daemon User core already exists, skip useradd Nov 4 23:55:23.615775 waagent[2664]: 2025-11-04T23:55:23.615745Z INFO Daemon Daemon Configure sudoer Nov 4 23:55:23.628142 systemd-networkd[2207]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 4 23:55:23.629535 waagent[2664]: 2025-11-04T23:55:23.629492Z INFO Daemon Daemon Configure sshd Nov 4 23:55:23.633451 waagent[2664]: 2025-11-04T23:55:23.633412Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 4 23:55:23.636474 waagent[2664]: 2025-11-04T23:55:23.635853Z INFO Daemon Daemon Deploy ssh public key. Nov 4 23:55:24.741309 waagent[2664]: 2025-11-04T23:55:24.741254Z INFO Daemon Daemon Provisioning complete Nov 4 23:55:24.753022 waagent[2664]: 2025-11-04T23:55:24.752980Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 4 23:55:24.759162 waagent[2664]: 2025-11-04T23:55:24.753696Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Nov 4 23:55:24.759162 waagent[2664]: 2025-11-04T23:55:24.754687Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Nov 4 23:55:24.858850 waagent[2772]: 2025-11-04T23:55:24.858775Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Nov 4 23:55:24.859135 waagent[2772]: 2025-11-04T23:55:24.858873Z INFO ExtHandler ExtHandler OS: flatcar 4487.0.0 Nov 4 23:55:24.859135 waagent[2772]: 2025-11-04T23:55:24.858911Z INFO ExtHandler ExtHandler Python: 3.11.13 Nov 4 23:55:24.859135 waagent[2772]: 2025-11-04T23:55:24.858949Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Nov 4 23:55:24.906273 waagent[2772]: 2025-11-04T23:55:24.906221Z INFO ExtHandler ExtHandler Distro: flatcar-4487.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Nov 4 23:55:24.906408 waagent[2772]: 2025-11-04T23:55:24.906379Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 4 23:55:24.906462 waagent[2772]: 2025-11-04T23:55:24.906437Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 4 23:55:24.913341 waagent[2772]: 2025-11-04T23:55:24.913296Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 4 23:55:24.923455 waagent[2772]: 2025-11-04T23:55:24.923426Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 4 23:55:24.923783 waagent[2772]: 2025-11-04T23:55:24.923754Z INFO ExtHandler Nov 4 23:55:24.923831 waagent[2772]: 2025-11-04T23:55:24.923806Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4baff2a4-83b6-4af5-a699-25144564bca3 eTag: 6420669391545100391 source: Fabric] Nov 4 23:55:24.924026 waagent[2772]: 2025-11-04T23:55:24.924000Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
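[editor's note] The "Configure sshd" step earlier in this stretch only says that waagent adds a snippet disabling password-based authentication and enabling keep-alive probing; the log does not show the snippet itself. A hedged sketch of a drop-in with that effect, using standard sshd_config options and assuming the stock sshd_config.d include directory; the filename and exact options are illustrative, not what the agent actually writes.

```python
import pathlib

# Illustrative only: the real snippet waagent installs may differ.
SNIPPET = """\
PasswordAuthentication no
KbdInteractiveAuthentication no
ClientAliveInterval 180
"""

dropin = pathlib.Path("/etc/ssh/sshd_config.d/90-example-waagent.conf")
dropin.parent.mkdir(parents=True, exist_ok=True)
dropin.write_text(SNIPPET)
print("wrote", dropin, "- validate with 'sshd -t', then reload sshd")
```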
Nov 4 23:55:24.924389 waagent[2772]: 2025-11-04T23:55:24.924360Z INFO ExtHandler Nov 4 23:55:24.924431 waagent[2772]: 2025-11-04T23:55:24.924403Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 4 23:55:24.928666 waagent[2772]: 2025-11-04T23:55:24.928637Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 4 23:55:24.999319 waagent[2772]: 2025-11-04T23:55:24.999242Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A58E69D4E4886213BB68C28FA1248F1519D2E6E7', 'hasPrivateKey': True} Nov 4 23:55:24.999615 waagent[2772]: 2025-11-04T23:55:24.999587Z INFO ExtHandler Fetch goal state completed Nov 4 23:55:25.013277 waagent[2772]: 2025-11-04T23:55:25.013234Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Nov 4 23:55:25.017249 waagent[2772]: 2025-11-04T23:55:25.017206Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2772 Nov 4 23:55:25.017362 waagent[2772]: 2025-11-04T23:55:25.017340Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 4 23:55:25.017594 waagent[2772]: 2025-11-04T23:55:25.017572Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Nov 4 23:55:25.018661 waagent[2772]: 2025-11-04T23:55:25.018629Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4487.0.0', '', 'Flatcar Container Linux by Kinvolk'] Nov 4 23:55:25.018946 waagent[2772]: 2025-11-04T23:55:25.018922Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4487.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Nov 4 23:55:25.019053 waagent[2772]: 2025-11-04T23:55:25.019032Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 4 23:55:25.019471 waagent[2772]: 2025-11-04T23:55:25.019444Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 4 23:55:25.078123 waagent[2772]: 2025-11-04T23:55:25.078079Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 4 23:55:25.078272 waagent[2772]: 2025-11-04T23:55:25.078251Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 4 23:55:25.083885 waagent[2772]: 2025-11-04T23:55:25.083546Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 4 23:55:25.088774 systemd[1]: Reload requested from client PID 2787 ('systemctl') (unit waagent.service)... Nov 4 23:55:25.088788 systemd[1]: Reloading... Nov 4 23:55:25.178156 zram_generator::config[2830]: No configuration found. Nov 4 23:55:25.336933 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 4 23:55:25.354655 systemd[1]: Reloading finished in 265 ms. Nov 4 23:55:25.365137 waagent[2772]: 2025-11-04T23:55:25.363315Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 4 23:55:25.365137 waagent[2772]: 2025-11-04T23:55:25.363456Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 4 23:55:25.749517 waagent[2772]: 2025-11-04T23:55:25.749406Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Nov 4 23:55:25.749755 waagent[2772]: 2025-11-04T23:55:25.749727Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Nov 4 23:55:25.750400 waagent[2772]: 2025-11-04T23:55:25.750359Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 4 23:55:25.750750 waagent[2772]: 2025-11-04T23:55:25.750722Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 4 23:55:25.750928 waagent[2772]: 2025-11-04T23:55:25.750906Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 4 23:55:25.750996 waagent[2772]: 2025-11-04T23:55:25.750962Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 4 23:55:25.751221 waagent[2772]: 2025-11-04T23:55:25.751198Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 4 23:55:25.751336 waagent[2772]: 2025-11-04T23:55:25.751291Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 4 23:55:25.751457 waagent[2772]: 2025-11-04T23:55:25.751425Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 4 23:55:25.751518 waagent[2772]: 2025-11-04T23:55:25.751486Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 4 23:55:25.751616 waagent[2772]: 2025-11-04T23:55:25.751597Z INFO EnvHandler ExtHandler Configure routes Nov 4 23:55:25.751675 waagent[2772]: 2025-11-04T23:55:25.751641Z INFO EnvHandler ExtHandler Gateway:None Nov 4 23:55:25.751716 waagent[2772]: 2025-11-04T23:55:25.751698Z INFO EnvHandler ExtHandler Routes:None Nov 4 23:55:25.751946 waagent[2772]: 2025-11-04T23:55:25.751922Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 4 23:55:25.752312 waagent[2772]: 2025-11-04T23:55:25.752273Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 4 23:55:25.752312 waagent[2772]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 4 23:55:25.752312 waagent[2772]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Nov 4 23:55:25.752312 waagent[2772]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 4 23:55:25.752312 waagent[2772]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 4 23:55:25.752312 waagent[2772]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 4 23:55:25.752312 waagent[2772]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 4 23:55:25.752611 waagent[2772]: 2025-11-04T23:55:25.752477Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 4 23:55:25.752611 waagent[2772]: 2025-11-04T23:55:25.752547Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 4 23:55:25.753079 waagent[2772]: 2025-11-04T23:55:25.753019Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
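[editor's note] The MonitorHandler dump above prints /proc/net/route raw: destinations, gateways and masks are little-endian hex IPv4, so `0108C80A` is 10.200.8.1 and `00FFFFFF` is the /24 netmask. A small sketch that decodes the table the same way the agent's "Examine /proc/net/route" step does.

```python
import socket
import struct


def hex_to_ip(h: str) -> str:
    """/proc/net/route stores IPv4 as little-endian hex, e.g. 0108C80A -> 10.200.8.1."""
    return socket.inet_ntoa(struct.pack("<I", int(h, 16)))


with open("/proc/net/route") as f:
    next(f)  # skip the header row
    for line in f:
        iface, dest, gw, flags, *_mid, mask, _mtu, _win, _irtt = line.split()
        print(f"{iface}: {hex_to_ip(dest)}/{hex_to_ip(mask)} via {hex_to_ip(gw)}")
```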
Nov 4 23:55:25.760834 waagent[2772]: 2025-11-04T23:55:25.760799Z INFO ExtHandler ExtHandler Nov 4 23:55:25.760902 waagent[2772]: 2025-11-04T23:55:25.760865Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 088962ed-c7d8-4292-8467-b03a069580bd correlation b1a5a69f-8b32-422d-a3af-157f554a24be created: 2025-11-04T23:54:18.705482Z] Nov 4 23:55:25.761213 waagent[2772]: 2025-11-04T23:55:25.761182Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Nov 4 23:55:25.761754 waagent[2772]: 2025-11-04T23:55:25.761726Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Nov 4 23:55:25.817634 waagent[2772]: 2025-11-04T23:55:25.817489Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Nov 4 23:55:25.817634 waagent[2772]: Try `iptables -h' or 'iptables --help' for more information.) Nov 4 23:55:25.817930 waagent[2772]: 2025-11-04T23:55:25.817904Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CFB5E077-3AE7-4FB7-84ED-25B493CA192C;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Nov 4 23:55:25.877018 waagent[2772]: 2025-11-04T23:55:25.876972Z INFO MonitorHandler ExtHandler Network interfaces: Nov 4 23:55:25.877018 waagent[2772]: Executing ['ip', '-a', '-o', 'link']: Nov 4 23:55:25.877018 waagent[2772]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 4 23:55:25.877018 waagent[2772]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:fc:bf:28 brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx7c1e52fcbf28 Nov 4 23:55:25.877018 waagent[2772]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:fc:bf:28 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Nov 4 23:55:25.877018 waagent[2772]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 4 23:55:25.877018 waagent[2772]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 4 23:55:25.877018 waagent[2772]: 2: eth0 inet 10.200.8.17/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 4 23:55:25.877018 waagent[2772]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 4 23:55:25.877018 waagent[2772]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 4 23:55:25.877018 waagent[2772]: 2: eth0 inet6 fe80::7e1e:52ff:fefc:bf28/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 4 23:55:25.924345 waagent[2772]: 2025-11-04T23:55:25.924295Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Nov 4 23:55:25.924345 waagent[2772]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 4 23:55:25.924345 waagent[2772]: pkts bytes target prot opt in out source destination Nov 4 23:55:25.924345 waagent[2772]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 4 23:55:25.924345 waagent[2772]: pkts bytes target prot opt in out source destination Nov 4 23:55:25.924345 waagent[2772]: Chain OUTPUT (policy ACCEPT 2 packets, 112 bytes) Nov 4 23:55:25.924345 waagent[2772]: pkts bytes target prot opt in out source destination Nov 4 
23:55:25.924345 waagent[2772]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 4 23:55:25.924345 waagent[2772]: 5 468 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 4 23:55:25.924345 waagent[2772]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 4 23:55:25.927030 waagent[2772]: 2025-11-04T23:55:25.926985Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 4 23:55:25.927030 waagent[2772]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 4 23:55:25.927030 waagent[2772]: pkts bytes target prot opt in out source destination Nov 4 23:55:25.927030 waagent[2772]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 4 23:55:25.927030 waagent[2772]: pkts bytes target prot opt in out source destination Nov 4 23:55:25.927030 waagent[2772]: Chain OUTPUT (policy ACCEPT 2 packets, 112 bytes) Nov 4 23:55:25.927030 waagent[2772]: pkts bytes target prot opt in out source destination Nov 4 23:55:25.927030 waagent[2772]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 4 23:55:25.927030 waagent[2772]: 9 816 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 4 23:55:25.927030 waagent[2772]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 4 23:55:32.417219 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 4 23:55:32.418573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:55:32.945152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:55:32.954274 (kubelet)[2925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:55:32.989600 kubelet[2925]: E1104 23:55:32.989561 2925 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:55:32.992267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:55:32.992399 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:55:32.992711 systemd[1]: kubelet.service: Consumed 132ms CPU time, 110.4M memory peak. Nov 4 23:55:35.060597 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 4 23:55:35.061720 systemd[1]: Started sshd@0-10.200.8.17:22-10.200.16.10:50468.service - OpenSSH per-connection server daemon (10.200.16.10:50468). Nov 4 23:55:35.885490 sshd[2934]: Accepted publickey for core from 10.200.16.10 port 50468 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:55:35.886614 sshd-session[2934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:35.891046 systemd-logind[2547]: New session 3 of user core. Nov 4 23:55:35.898262 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 4 23:55:36.455865 systemd[1]: Started sshd@1-10.200.8.17:22-10.200.16.10:50476.service - OpenSSH per-connection server daemon (10.200.16.10:50476). Nov 4 23:55:37.085577 sshd[2940]: Accepted publickey for core from 10.200.16.10 port 50476 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:55:37.086811 sshd-session[2940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:37.091280 systemd-logind[2547]: New session 4 of user core. 
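[editor's note] Stepping back to the firewall table the EnvHandler printed just before these SSH sessions: waagent installs three rules in the iptables `security` table to fence off the WireServer at 168.63.129.16 (allow DNS on port 53, allow traffic from UID 0 processes, drop any other new connections). Roughly equivalent commands, sketched via subprocess; the real agent may insert them with different ordering or extra options.

```python
import subprocess

WIRESERVER = "168.63.129.16"

RULES = [
    # allow DNS lookups against the WireServer
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    # allow traffic from root-owned processes (the agent itself runs as UID 0)
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # drop any other new connection attempts to the WireServer
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in RULES:
    subprocess.run(["iptables", "-w", "-t", "security"] + rule, check=True)
```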
Nov 4 23:55:37.098245 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 4 23:55:37.527819 sshd[2943]: Connection closed by 10.200.16.10 port 50476 Nov 4 23:55:37.528418 sshd-session[2940]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:37.531816 systemd[1]: sshd@1-10.200.8.17:22-10.200.16.10:50476.service: Deactivated successfully. Nov 4 23:55:37.533320 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 23:55:37.533969 systemd-logind[2547]: Session 4 logged out. Waiting for processes to exit. Nov 4 23:55:37.535012 systemd-logind[2547]: Removed session 4. Nov 4 23:55:37.644620 systemd[1]: Started sshd@2-10.200.8.17:22-10.200.16.10:50480.service - OpenSSH per-connection server daemon (10.200.16.10:50480). Nov 4 23:55:38.283459 sshd[2949]: Accepted publickey for core from 10.200.16.10 port 50480 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:55:38.284644 sshd-session[2949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:38.289228 systemd-logind[2547]: New session 5 of user core. Nov 4 23:55:38.298265 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 4 23:55:38.729957 sshd[2952]: Connection closed by 10.200.16.10 port 50480 Nov 4 23:55:38.730546 sshd-session[2949]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:38.733974 systemd[1]: sshd@2-10.200.8.17:22-10.200.16.10:50480.service: Deactivated successfully. Nov 4 23:55:38.735471 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 23:55:38.736119 systemd-logind[2547]: Session 5 logged out. Waiting for processes to exit. Nov 4 23:55:38.737291 systemd-logind[2547]: Removed session 5. Nov 4 23:55:38.840847 systemd[1]: Started sshd@3-10.200.8.17:22-10.200.16.10:50486.service - OpenSSH per-connection server daemon (10.200.16.10:50486). Nov 4 23:55:39.470164 sshd[2958]: Accepted publickey for core from 10.200.16.10 port 50486 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:55:39.471247 sshd-session[2958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:39.475160 systemd-logind[2547]: New session 6 of user core. Nov 4 23:55:39.488258 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 4 23:55:39.913887 sshd[2961]: Connection closed by 10.200.16.10 port 50486 Nov 4 23:55:39.914656 sshd-session[2958]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:39.917730 systemd[1]: sshd@3-10.200.8.17:22-10.200.16.10:50486.service: Deactivated successfully. Nov 4 23:55:39.919298 systemd[1]: session-6.scope: Deactivated successfully. Nov 4 23:55:39.920683 systemd-logind[2547]: Session 6 logged out. Waiting for processes to exit. Nov 4 23:55:39.921783 systemd-logind[2547]: Removed session 6. Nov 4 23:55:40.030787 systemd[1]: Started sshd@4-10.200.8.17:22-10.200.16.10:37010.service - OpenSSH per-connection server daemon (10.200.16.10:37010). Nov 4 23:55:40.668006 sshd[2967]: Accepted publickey for core from 10.200.16.10 port 37010 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:55:40.669163 sshd-session[2967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:40.673779 systemd-logind[2547]: New session 7 of user core. Nov 4 23:55:40.681291 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 4 23:55:41.185042 sudo[2971]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 23:55:41.185289 sudo[2971]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:55:41.207894 sudo[2971]: pam_unix(sudo:session): session closed for user root Nov 4 23:55:41.314533 sshd[2970]: Connection closed by 10.200.16.10 port 37010 Nov 4 23:55:41.315298 sshd-session[2967]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:41.319058 systemd[1]: sshd@4-10.200.8.17:22-10.200.16.10:37010.service: Deactivated successfully. Nov 4 23:55:41.320643 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 23:55:41.321426 systemd-logind[2547]: Session 7 logged out. Waiting for processes to exit. Nov 4 23:55:41.322716 systemd-logind[2547]: Removed session 7. Nov 4 23:55:41.432932 systemd[1]: Started sshd@5-10.200.8.17:22-10.200.16.10:37014.service - OpenSSH per-connection server daemon (10.200.16.10:37014). Nov 4 23:55:42.066614 sshd[2977]: Accepted publickey for core from 10.200.16.10 port 37014 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:55:42.067777 sshd-session[2977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:42.072265 systemd-logind[2547]: New session 8 of user core. Nov 4 23:55:42.082270 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 23:55:42.411882 sudo[2982]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 23:55:42.412150 sudo[2982]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:55:42.418396 sudo[2982]: pam_unix(sudo:session): session closed for user root Nov 4 23:55:42.422977 sudo[2981]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 23:55:42.423244 sudo[2981]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:55:42.431400 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:55:42.462728 augenrules[3004]: No rules Nov 4 23:55:42.463635 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:55:42.463905 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:55:42.464863 sudo[2981]: pam_unix(sudo:session): session closed for user root Nov 4 23:55:42.567726 sshd[2980]: Connection closed by 10.200.16.10 port 37014 Nov 4 23:55:42.568174 sshd-session[2977]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:42.570836 systemd[1]: sshd@5-10.200.8.17:22-10.200.16.10:37014.service: Deactivated successfully. Nov 4 23:55:42.572338 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 23:55:42.573896 systemd-logind[2547]: Session 8 logged out. Waiting for processes to exit. Nov 4 23:55:42.574569 systemd-logind[2547]: Removed session 8. Nov 4 23:55:42.684639 systemd[1]: Started sshd@6-10.200.8.17:22-10.200.16.10:37024.service - OpenSSH per-connection server daemon (10.200.16.10:37024). Nov 4 23:55:43.167249 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 4 23:55:43.168620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 4 23:55:43.315029 sshd[3013]: Accepted publickey for core from 10.200.16.10 port 37024 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:55:43.316148 sshd-session[3013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:43.320143 systemd-logind[2547]: New session 9 of user core. Nov 4 23:55:43.328311 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 23:55:43.660602 sudo[3022]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 4 23:55:43.660843 sudo[3022]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:55:43.670597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:55:43.684263 (kubelet)[3030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:55:43.717327 kubelet[3030]: E1104 23:55:43.717299 3030 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:55:43.719194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:55:43.719325 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:55:43.719630 systemd[1]: kubelet.service: Consumed 128ms CPU time, 108.4M memory peak. Nov 4 23:55:44.325916 chronyd[2526]: Selected source PHC0 Nov 4 23:55:45.602054 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 4 23:55:45.615356 (dockerd)[3050]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 4 23:55:46.411627 dockerd[3050]: time="2025-11-04T23:55:46.411044795Z" level=info msg="Starting up" Nov 4 23:55:46.412742 dockerd[3050]: time="2025-11-04T23:55:46.412706983Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 4 23:55:46.421999 dockerd[3050]: time="2025-11-04T23:55:46.421949852Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 4 23:55:46.448993 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1639369878-merged.mount: Deactivated successfully. Nov 4 23:55:46.541534 systemd[1]: var-lib-docker-metacopy\x2dcheck1979193965-merged.mount: Deactivated successfully. Nov 4 23:55:46.574357 dockerd[3050]: time="2025-11-04T23:55:46.574325881Z" level=info msg="Loading containers: start." Nov 4 23:55:46.642110 kernel: Initializing XFRM netlink socket Nov 4 23:55:47.061616 systemd-networkd[2207]: docker0: Link UP Nov 4 23:55:47.078361 dockerd[3050]: time="2025-11-04T23:55:47.078323806Z" level=info msg="Loading containers: done." 
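[editor's note] Once the daemon finishes initializing (the "API listen on /run/docker.sock" entry just below), the docker.socket unit systemd bound earlier is backed by a live HTTP API on the Unix socket. A quick sketch of querying it with only the standard library; the /version endpoint is part of Docker's Engine API, and the fields printed are assumptions about its JSON shape.

```python
import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection variant that talks to a Unix-domain socket."""

    def __init__(self, path: str):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock


conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
info = json.loads(conn.getresponse().read())
print(info.get("Version"), info.get("ApiVersion"))
```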
Nov 4 23:55:47.155598 dockerd[3050]: time="2025-11-04T23:55:47.155538746Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 4 23:55:47.155737 dockerd[3050]: time="2025-11-04T23:55:47.155639721Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 4 23:55:47.155737 dockerd[3050]: time="2025-11-04T23:55:47.155712038Z" level=info msg="Initializing buildkit" Nov 4 23:55:47.197563 dockerd[3050]: time="2025-11-04T23:55:47.197518792Z" level=info msg="Completed buildkit initialization" Nov 4 23:55:47.203979 dockerd[3050]: time="2025-11-04T23:55:47.203936280Z" level=info msg="Daemon has completed initialization" Nov 4 23:55:47.204169 dockerd[3050]: time="2025-11-04T23:55:47.204101154Z" level=info msg="API listen on /run/docker.sock" Nov 4 23:55:47.204240 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 4 23:55:48.116464 containerd[2578]: time="2025-11-04T23:55:48.116426095Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 4 23:55:48.912035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1956205623.mount: Deactivated successfully. Nov 4 23:55:50.015487 containerd[2578]: time="2025-11-04T23:55:50.015435241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:50.018194 containerd[2578]: time="2025-11-04T23:55:50.018162235Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065400" Nov 4 23:55:50.021956 containerd[2578]: time="2025-11-04T23:55:50.021909679Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:50.026599 containerd[2578]: time="2025-11-04T23:55:50.026336726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:50.027071 containerd[2578]: time="2025-11-04T23:55:50.027045039Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.910581625s" Nov 4 23:55:50.027138 containerd[2578]: time="2025-11-04T23:55:50.027085012Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 4 23:55:50.027690 containerd[2578]: time="2025-11-04T23:55:50.027671471Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 4 23:55:51.185587 containerd[2578]: time="2025-11-04T23:55:51.185539879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:51.187915 containerd[2578]: time="2025-11-04T23:55:51.187877487Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active 
requests=0, bytes read=21159765" Nov 4 23:55:51.190894 containerd[2578]: time="2025-11-04T23:55:51.190840885Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:51.195112 containerd[2578]: time="2025-11-04T23:55:51.194805145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:51.195667 containerd[2578]: time="2025-11-04T23:55:51.195460793Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.167761266s" Nov 4 23:55:51.195667 containerd[2578]: time="2025-11-04T23:55:51.195492563Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 4 23:55:51.196221 containerd[2578]: time="2025-11-04T23:55:51.196201107Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 4 23:55:52.176949 containerd[2578]: time="2025-11-04T23:55:52.176900895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:52.179111 containerd[2578]: time="2025-11-04T23:55:52.179077733Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725101" Nov 4 23:55:52.182991 containerd[2578]: time="2025-11-04T23:55:52.182951008Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:52.187895 containerd[2578]: time="2025-11-04T23:55:52.187849807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:52.189102 containerd[2578]: time="2025-11-04T23:55:52.188618912Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 992.389943ms" Nov 4 23:55:52.189102 containerd[2578]: time="2025-11-04T23:55:52.188655103Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 4 23:55:52.189424 containerd[2578]: time="2025-11-04T23:55:52.189404125Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 4 23:55:53.142064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2345014893.mount: Deactivated successfully. 
Nov 4 23:55:53.713707 containerd[2578]: time="2025-11-04T23:55:53.713662186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:53.716236 containerd[2578]: time="2025-11-04T23:55:53.716201186Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964707" Nov 4 23:55:53.720338 containerd[2578]: time="2025-11-04T23:55:53.720299523Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:53.726869 containerd[2578]: time="2025-11-04T23:55:53.726444145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:53.726869 containerd[2578]: time="2025-11-04T23:55:53.726736328Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.537305105s" Nov 4 23:55:53.726869 containerd[2578]: time="2025-11-04T23:55:53.726761504Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 4 23:55:53.727364 containerd[2578]: time="2025-11-04T23:55:53.727343517Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 4 23:55:53.917169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 4 23:55:53.918678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:55:54.438202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:55:54.444273 (kubelet)[3338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:55:54.476634 kubelet[3338]: E1104 23:55:54.476584 3338 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:55:54.478121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:55:54.478249 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:55:54.478600 systemd[1]: kubelet.service: Consumed 125ms CPU time, 110.2M memory peak. Nov 4 23:55:54.870050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount905946634.mount: Deactivated successfully. 
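(Annotation, not part of the journal.) systemd schedules yet another kubelet restart here (restart counter 3) and the unit fails with the same missing-config error. A hedged sketch for inspecting the unit's restart count and result from the node, assuming systemctl is available and the systemd version exposes NRestarts:

    import subprocess

    # Query the properties systemd itself tracks for the crash-looping unit.
    props = subprocess.run(
        ["systemctl", "show", "kubelet.service",
         "-p", "NRestarts", "-p", "Result", "-p", "ActiveState"],
        capture_output=True, text=True, check=True,
    )
    print(props.stdout.strip())  # e.g. NRestarts=3 / Result=exit-code / ActiveState=activating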
Nov 4 23:55:55.861834 containerd[2578]: time="2025-11-04T23:55:55.861785509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:55.864502 containerd[2578]: time="2025-11-04T23:55:55.864465269Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Nov 4 23:55:55.867343 containerd[2578]: time="2025-11-04T23:55:55.867301916Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:55.871508 containerd[2578]: time="2025-11-04T23:55:55.871321817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:55.871956 containerd[2578]: time="2025-11-04T23:55:55.871934308Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.144564184s" Nov 4 23:55:55.871997 containerd[2578]: time="2025-11-04T23:55:55.871965898Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 4 23:55:55.872642 containerd[2578]: time="2025-11-04T23:55:55.872616344Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 4 23:55:56.382381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135245137.mount: Deactivated successfully. 
Nov 4 23:55:56.400704 containerd[2578]: time="2025-11-04T23:55:56.400661706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:56.403296 containerd[2578]: time="2025-11-04T23:55:56.403238560Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Nov 4 23:55:56.407015 containerd[2578]: time="2025-11-04T23:55:56.406671731Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:56.411914 containerd[2578]: time="2025-11-04T23:55:56.411887544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:56.412322 containerd[2578]: time="2025-11-04T23:55:56.412301669Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 539.657677ms" Nov 4 23:55:56.412369 containerd[2578]: time="2025-11-04T23:55:56.412329490Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 4 23:55:56.412804 containerd[2578]: time="2025-11-04T23:55:56.412781831Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 4 23:55:59.097371 containerd[2578]: time="2025-11-04T23:55:59.097327981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:59.101712 containerd[2578]: time="2025-11-04T23:55:59.101673885Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514601" Nov 4 23:55:59.102101 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Nov 4 23:55:59.105166 containerd[2578]: time="2025-11-04T23:55:59.104833794Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:59.109548 containerd[2578]: time="2025-11-04T23:55:59.109517298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:59.110457 containerd[2578]: time="2025-11-04T23:55:59.110429153Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.697623224s" Nov 4 23:55:59.110519 containerd[2578]: time="2025-11-04T23:55:59.110458824Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 4 23:56:02.212292 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
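(Annotation, not part of the journal.) The containerd entries above record each control-plane image pull with its repo digest, unpacked size in bytes, and wall-clock duration (kube-apiserver in ~1.9 s, etcd in ~2.7 s, the pause image in ~540 ms). A small sketch, assuming the journal has been saved to a text file in the form shown here, that tabulates those pulls:

    import re
    import sys

    # Matches containerd's "Pulled image ... size ... in <duration>" summaries,
    # tolerating the backslash-escaped quotes that appear in the journal text above.
    PULLED = re.compile(
        r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*?'
        r'size \\?"(?P<size>\d+)\\?" in (?P<dur>[\d.]+m?s)'
    )

    def seconds(dur: str) -> float:
        return float(dur[:-2]) / 1000.0 if dur.endswith("ms") else float(dur[:-1])

    def main(path: str) -> None:
        text = open(path, encoding="utf-8", errors="replace").read()
        for m in PULLED.finditer(text):
            size_mib = int(m.group("size")) / (1024 * 1024)
            print(f'{m.group("image"):55} {size_mib:8.1f} MiB  {seconds(m.group("dur")):8.3f} s')

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else "/dev/stdin")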
Nov 4 23:56:02.212441 systemd[1]: kubelet.service: Consumed 125ms CPU time, 110.2M memory peak. Nov 4 23:56:02.214648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:56:02.238542 systemd[1]: Reload requested from client PID 3471 ('systemctl') (unit session-9.scope)... Nov 4 23:56:02.238559 systemd[1]: Reloading... Nov 4 23:56:02.337149 zram_generator::config[3519]: No configuration found. Nov 4 23:56:02.524315 systemd[1]: Reloading finished in 285 ms. Nov 4 23:56:02.671916 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 23:56:02.671997 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 23:56:02.672263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:56:02.672310 systemd[1]: kubelet.service: Consumed 74ms CPU time, 77.8M memory peak. Nov 4 23:56:02.674147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:56:03.208235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:56:03.218394 (kubelet)[3586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:56:03.253892 kubelet[3586]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:56:03.253892 kubelet[3586]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:56:03.253892 kubelet[3586]: I1104 23:56:03.253534 3586 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:56:03.422057 kubelet[3586]: I1104 23:56:03.422019 3586 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 4 23:56:03.422057 kubelet[3586]: I1104 23:56:03.422044 3586 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:56:03.422057 kubelet[3586]: I1104 23:56:03.422066 3586 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 4 23:56:03.422057 kubelet[3586]: I1104 23:56:03.422071 3586 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 23:56:03.422370 kubelet[3586]: I1104 23:56:03.422356 3586 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:56:03.435298 kubelet[3586]: E1104 23:56:03.435249 3586 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 23:56:03.435816 kubelet[3586]: I1104 23:56:03.435778 3586 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:56:03.442433 kubelet[3586]: I1104 23:56:03.442412 3586 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:56:03.444405 kubelet[3586]: I1104 23:56:03.444386 3586 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 4 23:56:03.445668 kubelet[3586]: I1104 23:56:03.445637 3586 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:56:03.445809 kubelet[3586]: I1104 23:56:03.445667 3586 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-n-fda2ba6bd5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:56:03.445925 kubelet[3586]: I1104 23:56:03.445813 3586 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 23:56:03.445925 kubelet[3586]: I1104 23:56:03.445823 3586 container_manager_linux.go:306] "Creating device plugin manager" Nov 4 23:56:03.445925 kubelet[3586]: I1104 23:56:03.445908 3586 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 4 23:56:03.454270 kubelet[3586]: I1104 23:56:03.454247 3586 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:56:03.454710 kubelet[3586]: I1104 23:56:03.454388 3586 kubelet.go:475] "Attempting to sync node with API server" Nov 4 23:56:03.454710 kubelet[3586]: I1104 23:56:03.454401 3586 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:56:03.454710 kubelet[3586]: I1104 23:56:03.454422 3586 kubelet.go:387] "Adding apiserver pod source" Nov 4 23:56:03.454710 kubelet[3586]: I1104 23:56:03.454448 3586 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:56:03.459157 kubelet[3586]: E1104 23:56:03.457905 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-n-fda2ba6bd5&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:56:03.459157 kubelet[3586]: E1104 23:56:03.458008 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:56:03.459157 kubelet[3586]: I1104 23:56:03.458363 3586 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:56:03.459157 kubelet[3586]: I1104 23:56:03.458836 3586 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:56:03.459157 kubelet[3586]: I1104 23:56:03.458863 3586 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 4 23:56:03.459157 kubelet[3586]: W1104 23:56:03.458906 3586 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 23:56:03.463269 kubelet[3586]: I1104 23:56:03.463254 3586 server.go:1262] "Started kubelet" Nov 4 23:56:03.464622 kubelet[3586]: I1104 23:56:03.463984 3586 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:56:03.468716 kubelet[3586]: E1104 23:56:03.467379 3586 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.0-n-fda2ba6bd5.1874f30680b2408f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.0-n-fda2ba6bd5,UID:ci-4487.0.0-n-fda2ba6bd5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.0-n-fda2ba6bd5,},FirstTimestamp:2025-11-04 23:56:03.463225487 +0000 UTC m=+0.241884637,LastTimestamp:2025-11-04 23:56:03.463225487 +0000 UTC m=+0.241884637,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.0-n-fda2ba6bd5,}" Nov 4 23:56:03.469036 kubelet[3586]: I1104 23:56:03.469016 3586 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:56:03.470163 kubelet[3586]: I1104 23:56:03.470147 3586 server.go:310] "Adding debug handlers to kubelet server" Nov 4 23:56:03.471889 kubelet[3586]: I1104 23:56:03.471636 3586 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 4 23:56:03.471889 kubelet[3586]: E1104 23:56:03.471834 3586 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" Nov 4 23:56:03.475112 kubelet[3586]: I1104 23:56:03.473534 3586 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:56:03.475112 kubelet[3586]: I1104 23:56:03.473576 3586 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 4 23:56:03.475112 kubelet[3586]: I1104 23:56:03.473717 3586 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:56:03.475112 kubelet[3586]: I1104 23:56:03.473921 3586 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 23:56:03.475112 kubelet[3586]: I1104 23:56:03.474220 3586 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 4 
23:56:03.475112 kubelet[3586]: I1104 23:56:03.474256 3586 reconciler.go:29] "Reconciler: start to sync state" Nov 4 23:56:03.475908 kubelet[3586]: E1104 23:56:03.475881 3586 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-fda2ba6bd5?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="200ms" Nov 4 23:56:03.476481 kubelet[3586]: E1104 23:56:03.476442 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:56:03.477636 kubelet[3586]: I1104 23:56:03.477605 3586 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 23:56:03.478394 kubelet[3586]: I1104 23:56:03.478381 3586 factory.go:223] Registration of the containerd container factory successfully Nov 4 23:56:03.478394 kubelet[3586]: I1104 23:56:03.478392 3586 factory.go:223] Registration of the systemd container factory successfully Nov 4 23:56:03.493818 kubelet[3586]: I1104 23:56:03.493269 3586 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 23:56:03.493818 kubelet[3586]: I1104 23:56:03.493281 3586 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 23:56:03.493818 kubelet[3586]: I1104 23:56:03.493296 3586 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:56:03.503261 kubelet[3586]: I1104 23:56:03.503244 3586 policy_none.go:49] "None policy: Start" Nov 4 23:56:03.503261 kubelet[3586]: I1104 23:56:03.503261 3586 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 4 23:56:03.503362 kubelet[3586]: I1104 23:56:03.503271 3586 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 4 23:56:03.510012 kubelet[3586]: I1104 23:56:03.509993 3586 policy_none.go:47] "Start" Nov 4 23:56:03.513031 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 23:56:03.523352 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 4 23:56:03.532380 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 23:56:03.533421 kubelet[3586]: E1104 23:56:03.533396 3586 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 23:56:03.533565 kubelet[3586]: I1104 23:56:03.533548 3586 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:56:03.533598 kubelet[3586]: I1104 23:56:03.533563 3586 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:56:03.534855 kubelet[3586]: I1104 23:56:03.533917 3586 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 23:56:03.537105 kubelet[3586]: E1104 23:56:03.536316 3586 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 4 23:56:03.537105 kubelet[3586]: E1104 23:56:03.536357 3586 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.0-n-fda2ba6bd5\" not found" Nov 4 23:56:03.540290 kubelet[3586]: I1104 23:56:03.540263 3586 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 4 23:56:03.543852 kubelet[3586]: I1104 23:56:03.543037 3586 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 4 23:56:03.543852 kubelet[3586]: I1104 23:56:03.543056 3586 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 4 23:56:03.543852 kubelet[3586]: I1104 23:56:03.543124 3586 kubelet.go:2427] "Starting kubelet main sync loop" Nov 4 23:56:03.543852 kubelet[3586]: E1104 23:56:03.543154 3586 kubelet.go:2451] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Nov 4 23:56:03.543852 kubelet[3586]: E1104 23:56:03.543772 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:56:03.635640 kubelet[3586]: I1104 23:56:03.635619 3586 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:03.635937 kubelet[3586]: E1104 23:56:03.635917 3586 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:03.676014 systemd[1]: Created slice kubepods-burstable-pod22b357fb7d27d32e956af85a79b45791.slice - libcontainer container kubepods-burstable-pod22b357fb7d27d32e956af85a79b45791.slice. Nov 4 23:56:03.676869 kubelet[3586]: E1104 23:56:03.676841 3586 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-fda2ba6bd5?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="400ms" Nov 4 23:56:03.682620 kubelet[3586]: E1104 23:56:03.682592 3586 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:03.690722 systemd[1]: Created slice kubepods-burstable-pode89b2dc533c3ff0ee581b5c654a027ac.slice - libcontainer container kubepods-burstable-pode89b2dc533c3ff0ee581b5c654a027ac.slice. Nov 4 23:56:03.692333 kubelet[3586]: E1104 23:56:03.692313 3586 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:03.705774 systemd[1]: Created slice kubepods-burstable-podb7655705bc9a47fa2907590f65f7426b.slice - libcontainer container kubepods-burstable-podb7655705bc9a47fa2907590f65f7426b.slice. 
Nov 4 23:56:03.707457 kubelet[3586]: E1104 23:56:03.707433 3586 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:03.776162 kubelet[3586]: I1104 23:56:03.776023 3586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22b357fb7d27d32e956af85a79b45791-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"22b357fb7d27d32e956af85a79b45791\") " pod="kube-system/kube-scheduler-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:03.776162 kubelet[3586]: I1104 23:56:03.776059 3586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e89b2dc533c3ff0ee581b5c654a027ac-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"e89b2dc533c3ff0ee581b5c654a027ac\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:03.776162 kubelet[3586]: I1104 23:56:03.776078 3586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e89b2dc533c3ff0ee581b5c654a027ac-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"e89b2dc533c3ff0ee581b5c654a027ac\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:03.776162 kubelet[3586]: I1104 23:56:03.776111 3586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7655705bc9a47fa2907590f65f7426b-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"b7655705bc9a47fa2907590f65f7426b\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:03.776162 kubelet[3586]: I1104 23:56:03.776129 3586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7655705bc9a47fa2907590f65f7426b-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"b7655705bc9a47fa2907590f65f7426b\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:03.776315 kubelet[3586]: I1104 23:56:03.776145 3586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e89b2dc533c3ff0ee581b5c654a027ac-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"e89b2dc533c3ff0ee581b5c654a027ac\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:03.776315 kubelet[3586]: I1104 23:56:03.776159 3586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7655705bc9a47fa2907590f65f7426b-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"b7655705bc9a47fa2907590f65f7426b\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:03.776315 kubelet[3586]: I1104 23:56:03.776176 3586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b7655705bc9a47fa2907590f65f7426b-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5\" (UID: 
\"b7655705bc9a47fa2907590f65f7426b\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:03.776315 kubelet[3586]: I1104 23:56:03.776192 3586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7655705bc9a47fa2907590f65f7426b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"b7655705bc9a47fa2907590f65f7426b\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:03.837513 kubelet[3586]: I1104 23:56:03.837485 3586 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:03.837802 kubelet[3586]: E1104 23:56:03.837773 3586 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:04.077617 kubelet[3586]: E1104 23:56:04.077499 3586 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-fda2ba6bd5?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="800ms" Nov 4 23:56:04.239745 kubelet[3586]: I1104 23:56:04.239715 3586 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:04.240070 kubelet[3586]: E1104 23:56:04.240047 3586 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:04.479853 kubelet[3586]: E1104 23:56:04.479814 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:56:04.749998 kubelet[3586]: E1104 23:56:04.749663 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-n-fda2ba6bd5&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:56:04.750134 containerd[2578]: time="2025-11-04T23:56:04.749829242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-n-fda2ba6bd5,Uid:22b357fb7d27d32e956af85a79b45791,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:04.835245 containerd[2578]: time="2025-11-04T23:56:04.835171943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-n-fda2ba6bd5,Uid:e89b2dc533c3ff0ee581b5c654a027ac,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:04.852936 kubelet[3586]: E1104 23:56:04.852903 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:56:04.873418 containerd[2578]: time="2025-11-04T23:56:04.873376889Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5,Uid:b7655705bc9a47fa2907590f65f7426b,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:04.878133 kubelet[3586]: E1104 23:56:04.878075 3586 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-fda2ba6bd5?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="1.6s" Nov 4 23:56:04.896605 kubelet[3586]: E1104 23:56:04.896557 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:56:05.041927 kubelet[3586]: I1104 23:56:05.041842 3586 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:05.042409 kubelet[3586]: E1104 23:56:05.042371 3586 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:05.516474 kubelet[3586]: E1104 23:56:05.516433 3586 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 23:56:05.658395 update_engine[2548]: I20251104 23:56:05.658304 2548 update_attempter.cc:509] Updating boot flags... Nov 4 23:56:06.275192 kubelet[3586]: E1104 23:56:06.275058 3586 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.0-n-fda2ba6bd5.1874f30680b2408f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.0-n-fda2ba6bd5,UID:ci-4487.0.0-n-fda2ba6bd5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.0-n-fda2ba6bd5,},FirstTimestamp:2025-11-04 23:56:03.463225487 +0000 UTC m=+0.241884637,LastTimestamp:2025-11-04 23:56:03.463225487 +0000 UTC m=+0.241884637,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.0-n-fda2ba6bd5,}" Nov 4 23:56:06.479284 kubelet[3586]: E1104 23:56:06.479233 3586 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-fda2ba6bd5?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="3.2s" Nov 4 23:56:06.602339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3203373443.mount: Deactivated successfully. 
Nov 4 23:56:06.631190 containerd[2578]: time="2025-11-04T23:56:06.631150384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:56:06.644478 kubelet[3586]: I1104 23:56:06.644454 3586 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:06.644798 kubelet[3586]: E1104 23:56:06.644735 3586 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:06.648866 containerd[2578]: time="2025-11-04T23:56:06.648837401Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Nov 4 23:56:06.654059 containerd[2578]: time="2025-11-04T23:56:06.654031178Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:56:06.660071 containerd[2578]: time="2025-11-04T23:56:06.660034572Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:56:06.667460 containerd[2578]: time="2025-11-04T23:56:06.667263481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 23:56:06.672308 containerd[2578]: time="2025-11-04T23:56:06.672276151Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:56:06.677950 containerd[2578]: time="2025-11-04T23:56:06.677919484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:56:06.678425 containerd[2578]: time="2025-11-04T23:56:06.678403454Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.819275623s" Nov 4 23:56:06.681601 containerd[2578]: time="2025-11-04T23:56:06.681573738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 23:56:06.682635 containerd[2578]: time="2025-11-04T23:56:06.682611841Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.743909582s" Nov 4 23:56:06.683520 containerd[2578]: time="2025-11-04T23:56:06.683498164Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.728067682s" Nov 4 23:56:07.026906 kubelet[3586]: E1104 23:56:07.026867 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:56:07.180286 kubelet[3586]: E1104 23:56:07.180245 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-n-fda2ba6bd5&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:56:07.650384 kubelet[3586]: E1104 23:56:07.650347 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:56:08.407523 kubelet[3586]: E1104 23:56:07.996224 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:56:09.680208 kubelet[3586]: E1104 23:56:09.680154 3586 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-fda2ba6bd5?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="6.4s" Nov 4 23:56:09.682965 kubelet[3586]: E1104 23:56:09.682940 3586 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 23:56:09.847295 kubelet[3586]: I1104 23:56:09.847263 3586 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:09.847573 kubelet[3586]: E1104 23:56:09.847549 3586 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:10.221113 containerd[2578]: time="2025-11-04T23:56:10.221045705Z" level=info msg="connecting to shim e1f3e8d97e331ce9a7030bedca00fe059041d1aed9a78061cbaa8e85920608c5" address="unix:///run/containerd/s/e9c56a58a8d2538b2994e9693a24bf50e1c032b7c2d61bc43a64136434cba4c4" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:10.245303 systemd[1]: Started cri-containerd-e1f3e8d97e331ce9a7030bedca00fe059041d1aed9a78061cbaa8e85920608c5.scope - libcontainer container e1f3e8d97e331ce9a7030bedca00fe059041d1aed9a78061cbaa8e85920608c5. 
Nov 4 23:56:11.084163 kubelet[3586]: E1104 23:56:10.945427 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-n-fda2ba6bd5&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:56:11.137467 containerd[2578]: time="2025-11-04T23:56:11.137430007Z" level=info msg="connecting to shim 9baa1e7fddc092f64a5b8583240204480a274911475b2f558c6918af09703833" address="unix:///run/containerd/s/dbb8cca826b412a4c5f7bc30b6650789dea68d9547eaadf887bcd2a68a0ad254" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:11.161248 systemd[1]: Started cri-containerd-9baa1e7fddc092f64a5b8583240204480a274911475b2f558c6918af09703833.scope - libcontainer container 9baa1e7fddc092f64a5b8583240204480a274911475b2f558c6918af09703833. Nov 4 23:56:11.163550 containerd[2578]: time="2025-11-04T23:56:11.163509480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-n-fda2ba6bd5,Uid:22b357fb7d27d32e956af85a79b45791,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1f3e8d97e331ce9a7030bedca00fe059041d1aed9a78061cbaa8e85920608c5\"" Nov 4 23:56:11.215339 containerd[2578]: time="2025-11-04T23:56:11.215315521Z" level=info msg="CreateContainer within sandbox \"e1f3e8d97e331ce9a7030bedca00fe059041d1aed9a78061cbaa8e85920608c5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 23:56:11.219922 containerd[2578]: time="2025-11-04T23:56:11.219882567Z" level=info msg="connecting to shim 4a40819b90cf86e23255c3275753d82b7d71ab8fb1c42d97359b111a3c35ed46" address="unix:///run/containerd/s/a2167cd2bd051afa9291d1f6eb6dd61419236ac4195c8cb4db5907508f0e11d4" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:11.242239 systemd[1]: Started cri-containerd-4a40819b90cf86e23255c3275753d82b7d71ab8fb1c42d97359b111a3c35ed46.scope - libcontainer container 4a40819b90cf86e23255c3275753d82b7d71ab8fb1c42d97359b111a3c35ed46. 
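(Annotation, not part of the journal.) With the scheduler and apiserver sandboxes started by containerd above, the same state can be cross-checked from the node with crictl, assuming it is installed; the runtime endpoint below is the usual containerd socket and may need adjusting:

    import subprocess

    CRICTL = ["crictl", "--runtime-endpoint", "unix:///run/containerd/containerd.sock"]

    def show(args: list[str]) -> None:
        out = subprocess.run(CRICTL + args, capture_output=True, text=True, check=True)
        print(out.stdout.rstrip())

    if __name__ == "__main__":
        show(["pods"])       # pod sandboxes, e.g. kube-scheduler-ci-4487.0.0-n-fda2ba6bd5
        show(["ps", "-a"])   # containers created inside those sandboxes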
Nov 4 23:56:11.984677 kubelet[3586]: E1104 23:56:11.984637 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:56:12.354835 kubelet[3586]: E1104 23:56:12.354727 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:56:12.748295 kubelet[3586]: E1104 23:56:12.748262 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:56:13.164071 containerd[2578]: time="2025-11-04T23:56:13.163902319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-n-fda2ba6bd5,Uid:e89b2dc533c3ff0ee581b5c654a027ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"9baa1e7fddc092f64a5b8583240204480a274911475b2f558c6918af09703833\"" Nov 4 23:56:13.537193 kubelet[3586]: E1104 23:56:13.537165 3586 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.0-n-fda2ba6bd5\" not found" Nov 4 23:56:15.362130 containerd[2578]: time="2025-11-04T23:56:15.361659692Z" level=info msg="CreateContainer within sandbox \"9baa1e7fddc092f64a5b8583240204480a274911475b2f558c6918af09703833\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 23:56:16.080976 kubelet[3586]: E1104 23:56:16.080936 3586 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-fda2ba6bd5?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="7s" Nov 4 23:56:16.250016 kubelet[3586]: I1104 23:56:16.249991 3586 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:16.250353 kubelet[3586]: E1104 23:56:16.250306 3586 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:16.275830 kubelet[3586]: E1104 23:56:16.275761 3586 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.0-n-fda2ba6bd5.1874f30680b2408f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.0-n-fda2ba6bd5,UID:ci-4487.0.0-n-fda2ba6bd5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.0-n-fda2ba6bd5,},FirstTimestamp:2025-11-04 23:56:03.463225487 +0000 UTC m=+0.241884637,LastTimestamp:2025-11-04 23:56:03.463225487 +0000 UTC 
m=+0.241884637,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.0-n-fda2ba6bd5,}" Nov 4 23:56:17.062700 containerd[2578]: time="2025-11-04T23:56:17.062426632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5,Uid:b7655705bc9a47fa2907590f65f7426b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a40819b90cf86e23255c3275753d82b7d71ab8fb1c42d97359b111a3c35ed46\"" Nov 4 23:56:18.352251 kubelet[3586]: E1104 23:56:18.352212 3586 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 23:56:18.958135 kubelet[3586]: E1104 23:56:18.611687 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-n-fda2ba6bd5&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:56:18.965366 containerd[2578]: time="2025-11-04T23:56:18.965317067Z" level=info msg="CreateContainer within sandbox \"4a40819b90cf86e23255c3275753d82b7d71ab8fb1c42d97359b111a3c35ed46\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 23:56:20.798793 kubelet[3586]: E1104 23:56:20.798754 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:56:22.366108 containerd[2578]: time="2025-11-04T23:56:22.365237799Z" level=info msg="Container 5f4fa32df2ecd90e474e4c336281f7dce0b5b58609bf4360da90e4f0360d2395: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:22.495173 containerd[2578]: time="2025-11-04T23:56:22.494220386Z" level=info msg="Container e010041bcdfe645af48a858705be0c2de0b0b73bfaf86e4d13c9c00886d18f80: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:22.565234 containerd[2578]: time="2025-11-04T23:56:22.565201366Z" level=info msg="Container c29abd6f3c9cf2c78d82f3c44f78cbbb4887d920aca0cf403bb1e75a8c346941: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:22.916623 kubelet[3586]: E1104 23:56:22.638376 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:56:23.081727 kubelet[3586]: E1104 23:56:23.081679 3586 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-fda2ba6bd5?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="7s" Nov 4 23:56:23.094101 kubelet[3586]: E1104 23:56:23.094065 3586 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:56:23.196232 containerd[2578]: time="2025-11-04T23:56:23.196145587Z" level=info msg="CreateContainer within sandbox \"9baa1e7fddc092f64a5b8583240204480a274911475b2f558c6918af09703833\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f4fa32df2ecd90e474e4c336281f7dce0b5b58609bf4360da90e4f0360d2395\"" Nov 4 23:56:23.196967 containerd[2578]: time="2025-11-04T23:56:23.196941899Z" level=info msg="StartContainer for \"5f4fa32df2ecd90e474e4c336281f7dce0b5b58609bf4360da90e4f0360d2395\"" Nov 4 23:56:23.198052 containerd[2578]: time="2025-11-04T23:56:23.198014664Z" level=info msg="connecting to shim 5f4fa32df2ecd90e474e4c336281f7dce0b5b58609bf4360da90e4f0360d2395" address="unix:///run/containerd/s/dbb8cca826b412a4c5f7bc30b6650789dea68d9547eaadf887bcd2a68a0ad254" protocol=ttrpc version=3 Nov 4 23:56:23.222255 systemd[1]: Started cri-containerd-5f4fa32df2ecd90e474e4c336281f7dce0b5b58609bf4360da90e4f0360d2395.scope - libcontainer container 5f4fa32df2ecd90e474e4c336281f7dce0b5b58609bf4360da90e4f0360d2395. Nov 4 23:56:23.227162 containerd[2578]: time="2025-11-04T23:56:23.227131555Z" level=info msg="CreateContainer within sandbox \"e1f3e8d97e331ce9a7030bedca00fe059041d1aed9a78061cbaa8e85920608c5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e010041bcdfe645af48a858705be0c2de0b0b73bfaf86e4d13c9c00886d18f80\"" Nov 4 23:56:23.227959 containerd[2578]: time="2025-11-04T23:56:23.227936308Z" level=info msg="StartContainer for \"e010041bcdfe645af48a858705be0c2de0b0b73bfaf86e4d13c9c00886d18f80\"" Nov 4 23:56:23.228913 containerd[2578]: time="2025-11-04T23:56:23.228889886Z" level=info msg="connecting to shim e010041bcdfe645af48a858705be0c2de0b0b73bfaf86e4d13c9c00886d18f80" address="unix:///run/containerd/s/e9c56a58a8d2538b2994e9693a24bf50e1c032b7c2d61bc43a64136434cba4c4" protocol=ttrpc version=3 Nov 4 23:56:23.252516 kubelet[3586]: I1104 23:56:23.252496 3586 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:23.252946 kubelet[3586]: E1104 23:56:23.252927 3586 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:23.253247 systemd[1]: Started cri-containerd-e010041bcdfe645af48a858705be0c2de0b0b73bfaf86e4d13c9c00886d18f80.scope - libcontainer container e010041bcdfe645af48a858705be0c2de0b0b73bfaf86e4d13c9c00886d18f80. 
Nov 4 23:56:23.263778 containerd[2578]: time="2025-11-04T23:56:23.263742327Z" level=info msg="CreateContainer within sandbox \"4a40819b90cf86e23255c3275753d82b7d71ab8fb1c42d97359b111a3c35ed46\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c29abd6f3c9cf2c78d82f3c44f78cbbb4887d920aca0cf403bb1e75a8c346941\"" Nov 4 23:56:23.264527 containerd[2578]: time="2025-11-04T23:56:23.264448326Z" level=info msg="StartContainer for \"c29abd6f3c9cf2c78d82f3c44f78cbbb4887d920aca0cf403bb1e75a8c346941\"" Nov 4 23:56:23.265700 containerd[2578]: time="2025-11-04T23:56:23.265676218Z" level=info msg="connecting to shim c29abd6f3c9cf2c78d82f3c44f78cbbb4887d920aca0cf403bb1e75a8c346941" address="unix:///run/containerd/s/a2167cd2bd051afa9291d1f6eb6dd61419236ac4195c8cb4db5907508f0e11d4" protocol=ttrpc version=3 Nov 4 23:56:23.297110 systemd[1]: Started cri-containerd-c29abd6f3c9cf2c78d82f3c44f78cbbb4887d920aca0cf403bb1e75a8c346941.scope - libcontainer container c29abd6f3c9cf2c78d82f3c44f78cbbb4887d920aca0cf403bb1e75a8c346941. Nov 4 23:56:23.303346 containerd[2578]: time="2025-11-04T23:56:23.303165079Z" level=info msg="StartContainer for \"5f4fa32df2ecd90e474e4c336281f7dce0b5b58609bf4360da90e4f0360d2395\" returns successfully" Nov 4 23:56:23.352225 containerd[2578]: time="2025-11-04T23:56:23.352144329Z" level=info msg="StartContainer for \"e010041bcdfe645af48a858705be0c2de0b0b73bfaf86e4d13c9c00886d18f80\" returns successfully" Nov 4 23:56:23.402467 containerd[2578]: time="2025-11-04T23:56:23.402433492Z" level=info msg="StartContainer for \"c29abd6f3c9cf2c78d82f3c44f78cbbb4887d920aca0cf403bb1e75a8c346941\" returns successfully" Nov 4 23:56:23.537490 kubelet[3586]: E1104 23:56:23.537419 3586 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.0-n-fda2ba6bd5\" not found" Nov 4 23:56:23.578059 kubelet[3586]: E1104 23:56:23.578039 3586 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:23.582149 kubelet[3586]: E1104 23:56:23.582131 3586 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:23.584241 kubelet[3586]: E1104 23:56:23.584222 3586 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:24.587469 kubelet[3586]: E1104 23:56:24.587252 3586 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:24.588685 kubelet[3586]: E1104 23:56:24.588075 3586 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:24.589247 kubelet[3586]: E1104 23:56:24.589233 3586 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:25.189719 kubelet[3586]: E1104 23:56:25.189598 3586 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes 
"ci-4487.0.0-n-fda2ba6bd5" not found Nov 4 23:56:25.563854 kubelet[3586]: E1104 23:56:25.563754 3586 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4487.0.0-n-fda2ba6bd5" not found Nov 4 23:56:25.589709 kubelet[3586]: E1104 23:56:25.589554 3586 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:25.589709 kubelet[3586]: E1104 23:56:25.589623 3586 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:26.007259 kubelet[3586]: E1104 23:56:26.007228 3586 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4487.0.0-n-fda2ba6bd5" not found Nov 4 23:56:27.259555 kubelet[3586]: E1104 23:56:27.259519 3586 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4487.0.0-n-fda2ba6bd5" not found Nov 4 23:56:27.541616 kubelet[3586]: E1104 23:56:27.541524 3586 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:27.673613 kubelet[3586]: E1104 23:56:27.673557 3586 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:29.119853 kubelet[3586]: E1104 23:56:29.119822 3586 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:30.085718 kubelet[3586]: E1104 23:56:30.085678 3586 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4487.0.0-n-fda2ba6bd5\" not found" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:30.254716 kubelet[3586]: I1104 23:56:30.254690 3586 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:30.713415 kubelet[3586]: I1104 23:56:30.713327 3586 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:30.713415 kubelet[3586]: E1104 23:56:30.713360 3586 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4487.0.0-n-fda2ba6bd5\": node \"ci-4487.0.0-n-fda2ba6bd5\" not found" Nov 4 23:56:30.743378 kubelet[3586]: E1104 23:56:30.743324 3586 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" Nov 4 23:56:30.817223 systemd[1]: Reload requested from client PID 3913 ('systemctl') (unit session-9.scope)... Nov 4 23:56:30.817237 systemd[1]: Reloading... Nov 4 23:56:30.843461 kubelet[3586]: E1104 23:56:30.843433 3586 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" Nov 4 23:56:30.906110 zram_generator::config[3964]: No configuration found. 
Nov 4 23:56:30.944518 kubelet[3586]: E1104 23:56:30.944491 3586 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" Nov 4 23:56:31.045566 kubelet[3586]: E1104 23:56:31.045491 3586 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" Nov 4 23:56:31.096612 systemd[1]: Reloading finished in 279 ms. Nov 4 23:56:31.121344 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:56:31.142898 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 23:56:31.143151 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:56:31.143203 systemd[1]: kubelet.service: Consumed 628ms CPU time, 123.8M memory peak. Nov 4 23:56:31.145160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:56:37.762423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:56:37.770371 (kubelet)[4028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:56:37.859115 kubelet[4028]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:56:37.859115 kubelet[4028]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:56:37.859988 kubelet[4028]: I1104 23:56:37.859073 4028 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:56:37.873120 kubelet[4028]: I1104 23:56:37.872477 4028 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 4 23:56:37.873235 kubelet[4028]: I1104 23:56:37.873221 4028 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:56:37.873297 kubelet[4028]: I1104 23:56:37.873291 4028 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 4 23:56:37.873338 kubelet[4028]: I1104 23:56:37.873331 4028 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 23:56:37.874117 kubelet[4028]: I1104 23:56:37.873594 4028 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:56:37.875975 kubelet[4028]: I1104 23:56:37.875939 4028 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 23:56:37.880812 kubelet[4028]: I1104 23:56:37.880795 4028 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:56:37.889941 kubelet[4028]: I1104 23:56:37.889927 4028 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:56:37.895463 kubelet[4028]: I1104 23:56:37.895448 4028 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 4 23:56:37.895782 kubelet[4028]: I1104 23:56:37.895750 4028 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:56:37.896008 kubelet[4028]: I1104 23:56:37.895842 4028 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-n-fda2ba6bd5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:56:37.896169 kubelet[4028]: I1104 23:56:37.896160 4028 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 23:56:37.896318 kubelet[4028]: I1104 23:56:37.896244 4028 container_manager_linux.go:306] "Creating device plugin manager" Nov 4 23:56:37.896389 kubelet[4028]: I1104 23:56:37.896383 4028 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 4 23:56:37.898139 kubelet[4028]: I1104 23:56:37.898124 4028 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:56:37.898687 kubelet[4028]: I1104 23:56:37.898633 4028 kubelet.go:475] "Attempting to sync node with API server" Nov 4 23:56:37.898687 kubelet[4028]: I1104 23:56:37.898660 4028 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:56:37.902138 kubelet[4028]: I1104 23:56:37.900307 4028 kubelet.go:387] "Adding apiserver pod source" Nov 4 23:56:37.902138 kubelet[4028]: I1104 23:56:37.900344 4028 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:56:37.906839 kubelet[4028]: I1104 23:56:37.906813 4028 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:56:37.909124 kubelet[4028]: I1104 23:56:37.908481 4028 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:56:37.909124 kubelet[4028]: I1104 23:56:37.908521 4028 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 4 23:56:37.912798 
kubelet[4028]: I1104 23:56:37.912743 4028 server.go:1262] "Started kubelet" Nov 4 23:56:37.921490 kubelet[4028]: I1104 23:56:37.921463 4028 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:56:37.926388 kubelet[4028]: I1104 23:56:37.926372 4028 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:56:37.930320 kubelet[4028]: I1104 23:56:37.930258 4028 server.go:310] "Adding debug handlers to kubelet server" Nov 4 23:56:37.933897 kubelet[4028]: I1104 23:56:37.932871 4028 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:56:37.934082 kubelet[4028]: I1104 23:56:37.934065 4028 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 4 23:56:37.934356 kubelet[4028]: I1104 23:56:37.934320 4028 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:56:37.938106 kubelet[4028]: I1104 23:56:37.937789 4028 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 23:56:37.938221 kubelet[4028]: I1104 23:56:37.938213 4028 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 4 23:56:37.938406 kubelet[4028]: E1104 23:56:37.938397 4028 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.0-n-fda2ba6bd5\" not found" Nov 4 23:56:37.944122 kubelet[4028]: I1104 23:56:37.943853 4028 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 4 23:56:37.944438 kubelet[4028]: I1104 23:56:37.944285 4028 reconciler.go:29] "Reconciler: start to sync state" Nov 4 23:56:37.946876 kubelet[4028]: I1104 23:56:37.946823 4028 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 23:56:37.948110 kubelet[4028]: I1104 23:56:37.948021 4028 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 4 23:56:37.950706 kubelet[4028]: I1104 23:56:37.950691 4028 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 4 23:56:37.950777 kubelet[4028]: I1104 23:56:37.950771 4028 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 4 23:56:37.950840 kubelet[4028]: I1104 23:56:37.950835 4028 kubelet.go:2427] "Starting kubelet main sync loop" Nov 4 23:56:37.950920 kubelet[4028]: E1104 23:56:37.950906 4028 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 23:56:37.953408 kubelet[4028]: E1104 23:56:37.953387 4028 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 23:56:37.954112 kubelet[4028]: I1104 23:56:37.953637 4028 factory.go:223] Registration of the containerd container factory successfully Nov 4 23:56:37.954112 kubelet[4028]: I1104 23:56:37.953648 4028 factory.go:223] Registration of the systemd container factory successfully Nov 4 23:56:37.996989 kubelet[4028]: I1104 23:56:37.996970 4028 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 23:56:37.997058 kubelet[4028]: I1104 23:56:37.997009 4028 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 23:56:37.997058 kubelet[4028]: I1104 23:56:37.997023 4028 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:56:37.997311 kubelet[4028]: I1104 23:56:37.997150 4028 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 23:56:37.997311 kubelet[4028]: I1104 23:56:37.997160 4028 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 23:56:37.997311 kubelet[4028]: I1104 23:56:37.997174 4028 policy_none.go:49] "None policy: Start" Nov 4 23:56:37.997311 kubelet[4028]: I1104 23:56:37.997200 4028 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 4 23:56:37.997311 kubelet[4028]: I1104 23:56:37.997211 4028 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 4 23:56:37.997311 kubelet[4028]: I1104 23:56:37.997307 4028 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 4 23:56:37.997311 kubelet[4028]: I1104 23:56:37.997314 4028 policy_none.go:47] "Start" Nov 4 23:56:38.000648 kubelet[4028]: E1104 23:56:38.000628 4028 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 23:56:38.000754 kubelet[4028]: I1104 23:56:38.000742 4028 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:56:38.000798 kubelet[4028]: I1104 23:56:38.000753 4028 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:56:38.001407 kubelet[4028]: I1104 23:56:38.001294 4028 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 23:56:38.003210 kubelet[4028]: E1104 23:56:38.003196 4028 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 4 23:56:38.051950 kubelet[4028]: I1104 23:56:38.051869 4028 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.052670 kubelet[4028]: I1104 23:56:38.051873 4028 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.054687 kubelet[4028]: I1104 23:56:38.054662 4028 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.069059 kubelet[4028]: I1104 23:56:38.069041 4028 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 4 23:56:38.069245 kubelet[4028]: I1104 23:56:38.069168 4028 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 4 23:56:38.070401 kubelet[4028]: I1104 23:56:38.070380 4028 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 4 23:56:38.103015 kubelet[4028]: I1104 23:56:38.102996 4028 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.110659 kubelet[4028]: I1104 23:56:38.110629 4028 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.110760 kubelet[4028]: I1104 23:56:38.110677 4028 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.110760 kubelet[4028]: I1104 23:56:38.110693 4028 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 23:56:38.111063 containerd[2578]: time="2025-11-04T23:56:38.111030665Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 4 23:56:38.111362 kubelet[4028]: I1104 23:56:38.111203 4028 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 23:56:38.145549 kubelet[4028]: I1104 23:56:38.145530 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7655705bc9a47fa2907590f65f7426b-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"b7655705bc9a47fa2907590f65f7426b\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.145743 kubelet[4028]: I1104 23:56:38.145553 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7655705bc9a47fa2907590f65f7426b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"b7655705bc9a47fa2907590f65f7426b\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.145743 kubelet[4028]: I1104 23:56:38.145569 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e89b2dc533c3ff0ee581b5c654a027ac-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"e89b2dc533c3ff0ee581b5c654a027ac\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.145743 kubelet[4028]: I1104 23:56:38.145582 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e89b2dc533c3ff0ee581b5c654a027ac-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"e89b2dc533c3ff0ee581b5c654a027ac\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.145743 kubelet[4028]: I1104 23:56:38.145598 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7655705bc9a47fa2907590f65f7426b-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"b7655705bc9a47fa2907590f65f7426b\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.145743 kubelet[4028]: I1104 23:56:38.145623 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22b357fb7d27d32e956af85a79b45791-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"22b357fb7d27d32e956af85a79b45791\") " pod="kube-system/kube-scheduler-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.145831 kubelet[4028]: I1104 23:56:38.145648 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e89b2dc533c3ff0ee581b5c654a027ac-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"e89b2dc533c3ff0ee581b5c654a027ac\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.145831 kubelet[4028]: I1104 23:56:38.145677 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7655705bc9a47fa2907590f65f7426b-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"b7655705bc9a47fa2907590f65f7426b\") " 
pod="kube-system/kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.145831 kubelet[4028]: I1104 23:56:38.145689 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b7655705bc9a47fa2907590f65f7426b-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5\" (UID: \"b7655705bc9a47fa2907590f65f7426b\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.376499 sudo[4063]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 4 23:56:38.376749 sudo[4063]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 4 23:56:38.700487 sudo[4063]: pam_unix(sudo:session): session closed for user root Nov 4 23:56:38.904380 kubelet[4028]: I1104 23:56:38.904163 4028 apiserver.go:52] "Watching apiserver" Nov 4 23:56:38.926146 systemd[1]: Created slice kubepods-besteffort-pod690f15c7_8b55_4db4_b32b_e575682e3ac8.slice - libcontainer container kubepods-besteffort-pod690f15c7_8b55_4db4_b32b_e575682e3ac8.slice. Nov 4 23:56:38.944782 kubelet[4028]: I1104 23:56:38.944684 4028 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 4 23:56:38.951643 kubelet[4028]: I1104 23:56:38.951568 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/690f15c7-8b55-4db4-b32b-e575682e3ac8-kube-proxy\") pod \"kube-proxy-nf7cb\" (UID: \"690f15c7-8b55-4db4-b32b-e575682e3ac8\") " pod="kube-system/kube-proxy-nf7cb" Nov 4 23:56:38.951805 kubelet[4028]: I1104 23:56:38.951784 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/690f15c7-8b55-4db4-b32b-e575682e3ac8-xtables-lock\") pod \"kube-proxy-nf7cb\" (UID: \"690f15c7-8b55-4db4-b32b-e575682e3ac8\") " pod="kube-system/kube-proxy-nf7cb" Nov 4 23:56:38.951893 kubelet[4028]: I1104 23:56:38.951884 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x545d\" (UniqueName: \"kubernetes.io/projected/690f15c7-8b55-4db4-b32b-e575682e3ac8-kube-api-access-x545d\") pod \"kube-proxy-nf7cb\" (UID: \"690f15c7-8b55-4db4-b32b-e575682e3ac8\") " pod="kube-system/kube-proxy-nf7cb" Nov 4 23:56:38.952010 kubelet[4028]: I1104 23:56:38.951962 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/690f15c7-8b55-4db4-b32b-e575682e3ac8-lib-modules\") pod \"kube-proxy-nf7cb\" (UID: \"690f15c7-8b55-4db4-b32b-e575682e3ac8\") " pod="kube-system/kube-proxy-nf7cb" Nov 4 23:56:38.955157 kubelet[4028]: I1104 23:56:38.955069 4028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487.0.0-n-fda2ba6bd5" podStartSLOduration=0.955038431 podStartE2EDuration="955.038431ms" podCreationTimestamp="2025-11-04 23:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:56:38.943690932 +0000 UTC m=+1.169830592" watchObservedRunningTime="2025-11-04 23:56:38.955038431 +0000 UTC m=+1.181178080" Nov 4 23:56:38.956040 kubelet[4028]: I1104 23:56:38.955868 4028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4487.0.0-n-fda2ba6bd5" podStartSLOduration=0.955857676 podStartE2EDuration="955.857676ms" podCreationTimestamp="2025-11-04 23:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:56:38.9558586 +0000 UTC m=+1.181998270" watchObservedRunningTime="2025-11-04 23:56:38.955857676 +0000 UTC m=+1.181997329" Nov 4 23:56:38.977754 kubelet[4028]: I1104 23:56:38.977694 4028 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:38.984261 kubelet[4028]: I1104 23:56:38.984119 4028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-fda2ba6bd5" podStartSLOduration=0.984078925 podStartE2EDuration="984.078925ms" podCreationTimestamp="2025-11-04 23:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:56:38.966223811 +0000 UTC m=+1.192363466" watchObservedRunningTime="2025-11-04 23:56:38.984078925 +0000 UTC m=+1.210218584" Nov 4 23:56:38.992861 kubelet[4028]: I1104 23:56:38.992837 4028 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 4 23:56:38.992991 kubelet[4028]: E1104 23:56:38.992982 4028 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-n-fda2ba6bd5\" already exists" pod="kube-system/kube-apiserver-ci-4487.0.0-n-fda2ba6bd5" Nov 4 23:56:39.456036 kubelet[4028]: I1104 23:56:39.455418 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-host-proc-sys-net\") pod \"cilium-kqkr9\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " pod="kube-system/cilium-kqkr9" Nov 4 23:56:39.456036 kubelet[4028]: I1104 23:56:39.455453 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-hubble-tls\") pod \"cilium-kqkr9\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " pod="kube-system/cilium-kqkr9" Nov 4 23:56:39.456036 kubelet[4028]: I1104 23:56:39.455474 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-run\") pod \"cilium-kqkr9\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " pod="kube-system/cilium-kqkr9" Nov 4 23:56:39.456036 kubelet[4028]: I1104 23:56:39.455486 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-cgroup\") pod \"cilium-kqkr9\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " pod="kube-system/cilium-kqkr9" Nov 4 23:56:39.456036 kubelet[4028]: I1104 23:56:39.455498 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-etc-cni-netd\") pod \"cilium-kqkr9\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " pod="kube-system/cilium-kqkr9" Nov 4 23:56:39.456036 
kubelet[4028]: I1104 23:56:39.455511 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s86sn\" (UniqueName: \"kubernetes.io/projected/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-kube-api-access-s86sn\") pod \"cilium-kqkr9\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " pod="kube-system/cilium-kqkr9" Nov 4 23:56:39.456275 kubelet[4028]: I1104 23:56:39.455528 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-hostproc\") pod \"cilium-kqkr9\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " pod="kube-system/cilium-kqkr9" Nov 4 23:56:39.456275 kubelet[4028]: I1104 23:56:39.455566 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cni-path\") pod \"cilium-kqkr9\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " pod="kube-system/cilium-kqkr9" Nov 4 23:56:39.456275 kubelet[4028]: I1104 23:56:39.455607 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-clustermesh-secrets\") pod \"cilium-kqkr9\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " pod="kube-system/cilium-kqkr9" Nov 4 23:56:39.456275 kubelet[4028]: I1104 23:56:39.455636 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-host-proc-sys-kernel\") pod \"cilium-kqkr9\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " pod="kube-system/cilium-kqkr9" Nov 4 23:56:39.456275 kubelet[4028]: I1104 23:56:39.455652 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-bpf-maps\") pod \"cilium-kqkr9\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " pod="kube-system/cilium-kqkr9" Nov 4 23:56:39.456275 kubelet[4028]: I1104 23:56:39.455665 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-lib-modules\") pod \"cilium-kqkr9\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " pod="kube-system/cilium-kqkr9" Nov 4 23:56:39.456377 kubelet[4028]: I1104 23:56:39.455681 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-xtables-lock\") pod \"cilium-kqkr9\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " pod="kube-system/cilium-kqkr9" Nov 4 23:56:39.456447 kubelet[4028]: I1104 23:56:39.456429 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-config-path\") pod \"cilium-kqkr9\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " pod="kube-system/cilium-kqkr9" Nov 4 23:56:39.557682 kubelet[4028]: E1104 23:56:39.557246 4028 configmap.go:193] Couldn't get configMap kube-system/cilium-config: object "kube-system"/"cilium-config" not registered Nov 4 23:56:39.557682 
kubelet[4028]: E1104 23:56:39.557321 4028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-config-path podName:50a433a6-aa81-458f-9e5f-1b9c98d0c7c7 nodeName:}" failed. No retries permitted until 2025-11-04 23:56:40.057299982 +0000 UTC m=+2.283439639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-config-path") pod "cilium-kqkr9" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7") : object "kube-system"/"cilium-config" not registered Nov 4 23:56:39.557682 kubelet[4028]: E1104 23:56:39.557261 4028 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: object "kube-system"/"cilium-clustermesh" not registered Nov 4 23:56:39.557682 kubelet[4028]: E1104 23:56:39.557613 4028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-clustermesh-secrets podName:50a433a6-aa81-458f-9e5f-1b9c98d0c7c7 nodeName:}" failed. No retries permitted until 2025-11-04 23:56:40.057597461 +0000 UTC m=+2.283737116 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-clustermesh-secrets") pod "cilium-kqkr9" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7") : object "kube-system"/"cilium-clustermesh" not registered Nov 4 23:56:39.557682 kubelet[4028]: E1104 23:56:39.557552 4028 projected.go:266] Couldn't get secret kube-system/hubble-server-certs: object "kube-system"/"hubble-server-certs" not registered Nov 4 23:56:39.557682 kubelet[4028]: E1104 23:56:39.557626 4028 projected.go:196] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-kqkr9: object "kube-system"/"hubble-server-certs" not registered Nov 4 23:56:39.558006 kubelet[4028]: E1104 23:56:39.557649 4028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-hubble-tls podName:50a433a6-aa81-458f-9e5f-1b9c98d0c7c7 nodeName:}" failed. No retries permitted until 2025-11-04 23:56:40.057642386 +0000 UTC m=+2.283782030 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-hubble-tls") pod "cilium-kqkr9" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7") : object "kube-system"/"hubble-server-certs" not registered Nov 4 23:56:40.500684 kubelet[4028]: E1104 23:56:40.060237 4028 projected.go:266] Couldn't get secret kube-system/hubble-server-certs: object "kube-system"/"hubble-server-certs" not registered Nov 4 23:56:40.500684 kubelet[4028]: E1104 23:56:40.060256 4028 projected.go:196] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-kqkr9: object "kube-system"/"hubble-server-certs" not registered Nov 4 23:56:40.500684 kubelet[4028]: E1104 23:56:40.060272 4028 configmap.go:193] Couldn't get configMap kube-system/cilium-config: object "kube-system"/"cilium-config" not registered Nov 4 23:56:40.500684 kubelet[4028]: E1104 23:56:40.060236 4028 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: object "kube-system"/"cilium-clustermesh" not registered Nov 4 23:56:40.500684 kubelet[4028]: E1104 23:56:40.060302 4028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-hubble-tls podName:50a433a6-aa81-458f-9e5f-1b9c98d0c7c7 nodeName:}" failed. No retries permitted until 2025-11-04 23:56:41.06028796 +0000 UTC m=+3.286427613 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-hubble-tls") pod "cilium-kqkr9" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7") : object "kube-system"/"hubble-server-certs" not registered Nov 4 23:56:40.500684 kubelet[4028]: E1104 23:56:40.060315 4028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-clustermesh-secrets podName:50a433a6-aa81-458f-9e5f-1b9c98d0c7c7 nodeName:}" failed. No retries permitted until 2025-11-04 23:56:41.060306448 +0000 UTC m=+3.286446102 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-clustermesh-secrets") pod "cilium-kqkr9" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7") : object "kube-system"/"cilium-clustermesh" not registered Nov 4 23:56:40.501242 kubelet[4028]: E1104 23:56:40.060325 4028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-config-path podName:50a433a6-aa81-458f-9e5f-1b9c98d0c7c7 nodeName:}" failed. No retries permitted until 2025-11-04 23:56:41.060319669 +0000 UTC m=+3.286459311 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-config-path") pod "cilium-kqkr9" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7") : object "kube-system"/"cilium-config" not registered Nov 4 23:56:40.502394 containerd[2578]: time="2025-11-04T23:56:40.502328446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nf7cb,Uid:690f15c7-8b55-4db4-b32b-e575682e3ac8,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:40.511725 systemd[1]: Created slice kubepods-burstable-pod50a433a6_aa81_458f_9e5f_1b9c98d0c7c7.slice - libcontainer container kubepods-burstable-pod50a433a6_aa81_458f_9e5f_1b9c98d0c7c7.slice. 
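The MountVolume.SetUp failures above are retried with a growing delay: durationBeforeRetry is 500ms after the first failure and 1s after the next, because the cilium-config ConfigMap, cilium-clustermesh Secret, and hubble-server-certs objects don't exist yet. A sketch of that doubling backoff shape; the 2-minute cap is an assumption, not taken from the log:

```python
# Doubling retry delay, matching the 500ms -> 1s progression in the
# nestedpendingoperations errors above. The 2-minute cap is an assumption.
def backoff_schedule(initial_s: float = 0.5, factor: float = 2.0,
                     cap_s: float = 120.0, attempts: int = 8):
    delay = initial_s
    for _ in range(attempts):
        yield delay
        delay = min(delay * factor, cap_s)

print([f"{d:g}s" for d in backoff_schedule()])
# ['0.5s', '1s', '2s', '4s', '8s', '16s', '32s', '64s']
```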
Nov 4 23:56:40.563782 kubelet[4028]: I1104 23:56:40.563752 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwswv\" (UniqueName: \"kubernetes.io/projected/f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16-kube-api-access-wwswv\") pod \"cilium-operator-6f9c7c5859-k28jv\" (UID: \"f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16\") " pod="kube-system/cilium-operator-6f9c7c5859-k28jv" Nov 4 23:56:40.563939 kubelet[4028]: I1104 23:56:40.563811 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-k28jv\" (UID: \"f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16\") " pod="kube-system/cilium-operator-6f9c7c5859-k28jv" Nov 4 23:56:41.480637 systemd[1]: Created slice kubepods-besteffort-podf56fbbfb_10dc_44b1_b5d0_bedfda2eaa16.slice - libcontainer container kubepods-besteffort-podf56fbbfb_10dc_44b1_b5d0_bedfda2eaa16.slice. Nov 4 23:56:44.463758 containerd[2578]: time="2025-11-04T23:56:44.463695242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-k28jv,Uid:f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:44.969438 containerd[2578]: time="2025-11-04T23:56:44.969279148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kqkr9,Uid:50a433a6-aa81-458f-9e5f-1b9c98d0c7c7,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:45.153282 containerd[2578]: time="2025-11-04T23:56:45.153228944Z" level=info msg="connecting to shim 3c0f8e88112f61a179388486290700544fcbf39d533c8078f89a9b25ade3967b" address="unix:///run/containerd/s/d033de7b121dc1a0697818e49394819a39cc0067bedb746b52e322f0166a2084" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:45.176363 containerd[2578]: time="2025-11-04T23:56:45.176312460Z" level=info msg="connecting to shim ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd" address="unix:///run/containerd/s/9146dfcde77747ea62d4fcd2d202f605ff225d83a4e0680b9927a64520c73f9b" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:45.191259 systemd[1]: Started cri-containerd-3c0f8e88112f61a179388486290700544fcbf39d533c8078f89a9b25ade3967b.scope - libcontainer container 3c0f8e88112f61a179388486290700544fcbf39d533c8078f89a9b25ade3967b. Nov 4 23:56:45.198845 containerd[2578]: time="2025-11-04T23:56:45.198229874Z" level=info msg="connecting to shim f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec" address="unix:///run/containerd/s/124966422328bf95144655d30878677eaaf4838453ff30a83da12c9c8af8ac14" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:45.222307 systemd[1]: Started cri-containerd-ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd.scope - libcontainer container ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd. Nov 4 23:56:45.226809 systemd[1]: Started cri-containerd-f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec.scope - libcontainer container f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec. 
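Each "connecting to shim ... address=unix:///run/containerd/s/..." line above refers to a per-sandbox ttrpc socket. A hypothetical check that such a socket exists and accepts a connection; the path is copied from the log, and the sketch does not speak the ttrpc protocol, it only connects and closes:

```python
# Check that a containerd shim socket from the log exists and accepts connections.
# Diagnostic sketch only; it does not implement ttrpc.
import os
import socket

SHIM_SOCKET = "/run/containerd/s/9146dfcde77747ea62d4fcd2d202f605ff225d83a4e0680b9927a64520c73f9b"

def shim_reachable(path: str) -> bool:
    if not os.path.exists(path):
        return False
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        try:
            s.connect(path)
            return True
        except OSError:
            return False

print(shim_reachable(SHIM_SOCKET))
```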
Nov 4 23:56:45.243693 containerd[2578]: time="2025-11-04T23:56:45.243658817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nf7cb,Uid:690f15c7-8b55-4db4-b32b-e575682e3ac8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c0f8e88112f61a179388486290700544fcbf39d533c8078f89a9b25ade3967b\"" Nov 4 23:56:45.259021 containerd[2578]: time="2025-11-04T23:56:45.258845975Z" level=info msg="CreateContainer within sandbox \"3c0f8e88112f61a179388486290700544fcbf39d533c8078f89a9b25ade3967b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 23:56:45.262208 containerd[2578]: time="2025-11-04T23:56:45.262184018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kqkr9,Uid:50a433a6-aa81-458f-9e5f-1b9c98d0c7c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\"" Nov 4 23:56:45.266440 containerd[2578]: time="2025-11-04T23:56:45.266401484Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 4 23:56:45.290727 containerd[2578]: time="2025-11-04T23:56:45.290698526Z" level=info msg="Container cfff54d71ac6075e7cf2c9e97a3e258449206a7b171f83cea3ce545042c7f78d: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:45.293189 containerd[2578]: time="2025-11-04T23:56:45.293162350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-k28jv,Uid:f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec\"" Nov 4 23:56:45.307979 containerd[2578]: time="2025-11-04T23:56:45.307939074Z" level=info msg="CreateContainer within sandbox \"3c0f8e88112f61a179388486290700544fcbf39d533c8078f89a9b25ade3967b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cfff54d71ac6075e7cf2c9e97a3e258449206a7b171f83cea3ce545042c7f78d\"" Nov 4 23:56:45.309311 containerd[2578]: time="2025-11-04T23:56:45.309253797Z" level=info msg="StartContainer for \"cfff54d71ac6075e7cf2c9e97a3e258449206a7b171f83cea3ce545042c7f78d\"" Nov 4 23:56:45.312508 containerd[2578]: time="2025-11-04T23:56:45.312469149Z" level=info msg="connecting to shim cfff54d71ac6075e7cf2c9e97a3e258449206a7b171f83cea3ce545042c7f78d" address="unix:///run/containerd/s/d033de7b121dc1a0697818e49394819a39cc0067bedb746b52e322f0166a2084" protocol=ttrpc version=3 Nov 4 23:56:45.338250 systemd[1]: Started cri-containerd-cfff54d71ac6075e7cf2c9e97a3e258449206a7b171f83cea3ce545042c7f78d.scope - libcontainer container cfff54d71ac6075e7cf2c9e97a3e258449206a7b171f83cea3ce545042c7f78d. 
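The PullImage request above uses an image reference pinned by both a tag and a digest: quay.io/cilium/cilium:v1.12.5@sha256:06ce.... A small sketch that splits a reference of exactly that shape into repository, tag, and digest; it assumes the tag-plus-digest form shown in the log and does no further validation:

```python
# Split the pinned image reference from the PullImage log line above.
REF = ("quay.io/cilium/cilium:v1.12.5@sha256:"
       "06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")

def split_reference(ref: str) -> dict:
    name, _, digest = ref.partition("@")     # digest follows '@'
    repo, _, tag = name.rpartition(":")      # tag follows the last ':' in the name
    return {"repository": repo, "tag": tag, "digest": digest}

print(split_reference(REF))
# {'repository': 'quay.io/cilium/cilium', 'tag': 'v1.12.5', 'digest': 'sha256:06ce...'}
```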
Nov 4 23:56:45.373267 containerd[2578]: time="2025-11-04T23:56:45.373233585Z" level=info msg="StartContainer for \"cfff54d71ac6075e7cf2c9e97a3e258449206a7b171f83cea3ce545042c7f78d\" returns successfully" Nov 4 23:56:46.998542 kubelet[4028]: I1104 23:56:46.997957 4028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nf7cb" podStartSLOduration=9.997937899 podStartE2EDuration="9.997937899s" podCreationTimestamp="2025-11-04 23:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:56:46.008925862 +0000 UTC m=+8.235065518" watchObservedRunningTime="2025-11-04 23:56:46.997937899 +0000 UTC m=+9.224077560" Nov 4 23:56:50.058419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount655603574.mount: Deactivated successfully. Nov 4 23:56:51.981497 containerd[2578]: time="2025-11-04T23:56:51.981441250Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:51.986237 containerd[2578]: time="2025-11-04T23:56:51.986201559Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 4 23:56:51.990188 containerd[2578]: time="2025-11-04T23:56:51.990156575Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:51.991476 containerd[2578]: time="2025-11-04T23:56:51.991390178Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.7249582s" Nov 4 23:56:51.991616 containerd[2578]: time="2025-11-04T23:56:51.991420520Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 4 23:56:51.992513 containerd[2578]: time="2025-11-04T23:56:51.992487712Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 4 23:56:52.001404 containerd[2578]: time="2025-11-04T23:56:52.001377110Z" level=info msg="CreateContainer within sandbox \"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 4 23:56:52.023351 containerd[2578]: time="2025-11-04T23:56:52.023307197Z" level=info msg="Container ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:52.025891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4123225099.mount: Deactivated successfully. 
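The pull above reports bytes read=166730503 and completes "in 6.7249582s", so the effective download rate follows directly from the two logged values:

```python
# Effective pull rate for the cilium image, from the two values logged above.
bytes_read = 166_730_503       # "bytes read=166730503"
duration_s = 6.7249582         # "... in 6.7249582s"

rate_mib_s = bytes_read / duration_s / (1024 * 1024)
print(f"{rate_mib_s:.1f} MiB/s")   # roughly 23.6 MiB/s
```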
Nov 4 23:56:52.040406 containerd[2578]: time="2025-11-04T23:56:52.040379722Z" level=info msg="CreateContainer within sandbox \"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\"" Nov 4 23:56:52.040860 containerd[2578]: time="2025-11-04T23:56:52.040761667Z" level=info msg="StartContainer for \"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\"" Nov 4 23:56:52.041890 containerd[2578]: time="2025-11-04T23:56:52.041791582Z" level=info msg="connecting to shim ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3" address="unix:///run/containerd/s/9146dfcde77747ea62d4fcd2d202f605ff225d83a4e0680b9927a64520c73f9b" protocol=ttrpc version=3 Nov 4 23:56:52.066257 systemd[1]: Started cri-containerd-ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3.scope - libcontainer container ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3. Nov 4 23:56:52.093176 containerd[2578]: time="2025-11-04T23:56:52.092435279Z" level=info msg="StartContainer for \"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\" returns successfully" Nov 4 23:56:52.099176 systemd[1]: cri-containerd-ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3.scope: Deactivated successfully. Nov 4 23:56:52.102425 containerd[2578]: time="2025-11-04T23:56:52.102392888Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\" id:\"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\" pid:4427 exited_at:{seconds:1762300612 nanos:101791307}" Nov 4 23:56:52.102499 containerd[2578]: time="2025-11-04T23:56:52.102448230Z" level=info msg="received exit event container_id:\"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\" id:\"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\" pid:4427 exited_at:{seconds:1762300612 nanos:101791307}" Nov 4 23:56:52.117059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3-rootfs.mount: Deactivated successfully. 
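The TaskExit event above carries exited_at as {seconds:1762300612 nanos:101791307}; converting that epoch pair reproduces the wall-clock time printed elsewhere on the same line:

```python
# Convert the exited_at {seconds, nanos} pair from the TaskExit event above.
from datetime import datetime, timezone

seconds, nanos = 1762300612, 101791307
exited_at = datetime.fromtimestamp(seconds, tz=timezone.utc)
print(f"{exited_at:%Y-%m-%dT%H:%M:%S}.{nanos:09d}Z")
# 2025-11-04T23:56:52.101791307Z
```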
Nov 4 23:57:02.103279 containerd[2578]: time="2025-11-04T23:57:02.103228000Z" level=error msg="failed to handle container TaskExit event container_id:\"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\" id:\"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\" pid:4427 exited_at:{seconds:1762300612 nanos:101791307}" error="failed to stop container: failed to delete task: context deadline exceeded" Nov 4 23:57:04.077164 containerd[2578]: time="2025-11-04T23:57:04.077119007Z" level=info msg="TaskExit event container_id:\"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\" id:\"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\" pid:4427 exited_at:{seconds:1762300612 nanos:101791307}" Nov 4 23:57:06.077514 containerd[2578]: time="2025-11-04T23:57:06.077475686Z" level=error msg="get state for ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3" error="context deadline exceeded" Nov 4 23:57:06.077514 containerd[2578]: time="2025-11-04T23:57:06.077503697Z" level=warning msg="unknown status" status=0 Nov 4 23:57:08.078362 containerd[2578]: time="2025-11-04T23:57:08.078315567Z" level=error msg="get state for ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3" error="context deadline exceeded" Nov 4 23:57:08.078362 containerd[2578]: time="2025-11-04T23:57:08.078348925Z" level=warning msg="unknown status" status=0 Nov 4 23:57:10.079969 containerd[2578]: time="2025-11-04T23:57:10.079797895Z" level=error msg="get state for ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3" error="context deadline exceeded" Nov 4 23:57:10.079969 containerd[2578]: time="2025-11-04T23:57:10.079849945Z" level=warning msg="unknown status" status=0 Nov 4 23:57:11.269186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount356155290.mount: Deactivated successfully. Nov 4 23:57:11.507114 containerd[2578]: time="2025-11-04T23:57:11.506412032Z" level=error msg="ttrpc: received message on inactive stream" stream=35 Nov 4 23:57:11.507114 containerd[2578]: time="2025-11-04T23:57:11.506564018Z" level=error msg="ttrpc: received message on inactive stream" stream=37 Nov 4 23:57:11.507114 containerd[2578]: time="2025-11-04T23:57:11.506574375Z" level=error msg="ttrpc: received message on inactive stream" stream=31 Nov 4 23:57:11.507114 containerd[2578]: time="2025-11-04T23:57:11.506580274Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Nov 4 23:57:11.507814 containerd[2578]: time="2025-11-04T23:57:11.507782810Z" level=info msg="Ensure that container ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3 in task-service has been cleanup successfully" Nov 4 23:57:12.110981 containerd[2578]: time="2025-11-04T23:57:12.110857367Z" level=info msg="CreateContainer within sandbox \"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 4 23:57:12.369008 containerd[2578]: time="2025-11-04T23:57:12.367848226Z" level=info msg="Container 6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:57:12.371852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3492271192.mount: Deactivated successfully. 
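The "context deadline exceeded" / "unknown status" cycle above is containerd giving up on a shim call after a deadline and polling again (the "get state" retries land roughly every 2s until the cleanup at 23:57:11 succeeds). A loose Python illustration of the same pattern, not containerd code: a call bounded by a deadline that raises instead of blocking forever:

```python
# Illustration of a deadline-bounded call, loosely analogous to Go's
# "context deadline exceeded" seen in the containerd errors above.
import concurrent.futures
import time

def slow_shim_call() -> str:
    time.sleep(5)                  # stand-in for a shim that is not responding
    return "state"

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(slow_shim_call)
    try:
        print(future.result(timeout=2))        # 2s deadline, like the retry cadence
    except concurrent.futures.TimeoutError:
        print("get state failed: deadline exceeded")
```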
Nov 4 23:57:12.465988 containerd[2578]: time="2025-11-04T23:57:12.465773553Z" level=info msg="CreateContainer within sandbox \"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1\"" Nov 4 23:57:12.467921 containerd[2578]: time="2025-11-04T23:57:12.466467318Z" level=info msg="StartContainer for \"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1\"" Nov 4 23:57:12.468130 containerd[2578]: time="2025-11-04T23:57:12.467965385Z" level=info msg="connecting to shim 6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1" address="unix:///run/containerd/s/9146dfcde77747ea62d4fcd2d202f605ff225d83a4e0680b9927a64520c73f9b" protocol=ttrpc version=3 Nov 4 23:57:12.496328 systemd[1]: Started cri-containerd-6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1.scope - libcontainer container 6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1. Nov 4 23:57:12.540872 containerd[2578]: time="2025-11-04T23:57:12.540833881Z" level=info msg="StartContainer for \"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1\" returns successfully" Nov 4 23:57:12.552779 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 23:57:12.553011 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:57:12.553059 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:57:12.554843 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:57:12.558832 systemd[1]: cri-containerd-6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1.scope: Deactivated successfully. Nov 4 23:57:12.559841 containerd[2578]: time="2025-11-04T23:57:12.559821275Z" level=info msg="received exit event container_id:\"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1\" id:\"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1\" pid:4487 exited_at:{seconds:1762300632 nanos:559388595}" Nov 4 23:57:12.560117 containerd[2578]: time="2025-11-04T23:57:12.559939643Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1\" id:\"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1\" pid:4487 exited_at:{seconds:1762300632 nanos:559388595}" Nov 4 23:57:12.579168 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:57:13.366907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1-rootfs.mount: Deactivated successfully. 
Nov 4 23:57:17.115707 containerd[2578]: time="2025-11-04T23:57:17.115662121Z" level=info msg="CreateContainer within sandbox \"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 4 23:57:17.162520 containerd[2578]: time="2025-11-04T23:57:17.162473191Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:57:17.208114 containerd[2578]: time="2025-11-04T23:57:17.208061640Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 4 23:57:17.257414 containerd[2578]: time="2025-11-04T23:57:17.257328220Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:57:17.304890 containerd[2578]: time="2025-11-04T23:57:17.304844483Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 25.312310479s" Nov 4 23:57:17.304890 containerd[2578]: time="2025-11-04T23:57:17.304889317Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 4 23:57:17.415953 containerd[2578]: time="2025-11-04T23:57:17.415874071Z" level=info msg="CreateContainer within sandbox \"f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 4 23:57:17.463735 containerd[2578]: time="2025-11-04T23:57:17.463707573Z" level=info msg="Container 3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:57:17.721309 containerd[2578]: time="2025-11-04T23:57:17.721218179Z" level=info msg="Container 965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:57:17.810169 containerd[2578]: time="2025-11-04T23:57:17.810140521Z" level=info msg="CreateContainer within sandbox \"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6\"" Nov 4 23:57:17.810891 containerd[2578]: time="2025-11-04T23:57:17.810862808Z" level=info msg="StartContainer for \"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6\"" Nov 4 23:57:17.812403 containerd[2578]: time="2025-11-04T23:57:17.812371890Z" level=info msg="connecting to shim 3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6" address="unix:///run/containerd/s/9146dfcde77747ea62d4fcd2d202f605ff225d83a4e0680b9927a64520c73f9b" protocol=ttrpc version=3 Nov 4 23:57:17.832234 systemd[1]: Started cri-containerd-3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6.scope - libcontainer container 
3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6. Nov 4 23:57:17.861064 systemd[1]: cri-containerd-3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6.scope: Deactivated successfully. Nov 4 23:57:17.863835 containerd[2578]: time="2025-11-04T23:57:17.863810101Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6\" id:\"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6\" pid:4540 exited_at:{seconds:1762300637 nanos:863510853}" Nov 4 23:57:17.902316 containerd[2578]: time="2025-11-04T23:57:17.902270921Z" level=info msg="received exit event container_id:\"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6\" id:\"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6\" pid:4540 exited_at:{seconds:1762300637 nanos:863510853}" Nov 4 23:57:17.904318 containerd[2578]: time="2025-11-04T23:57:17.904285884Z" level=info msg="CreateContainer within sandbox \"f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\"" Nov 4 23:57:17.906107 containerd[2578]: time="2025-11-04T23:57:17.905831746Z" level=info msg="StartContainer for \"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\"" Nov 4 23:57:17.908490 containerd[2578]: time="2025-11-04T23:57:17.908455997Z" level=info msg="connecting to shim 965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0" address="unix:///run/containerd/s/124966422328bf95144655d30878677eaaf4838453ff30a83da12c9c8af8ac14" protocol=ttrpc version=3 Nov 4 23:57:17.912822 containerd[2578]: time="2025-11-04T23:57:17.912794867Z" level=info msg="StartContainer for \"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6\" returns successfully" Nov 4 23:57:17.932227 systemd[1]: Started cri-containerd-965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0.scope - libcontainer container 965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0. Nov 4 23:57:18.463711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6-rootfs.mount: Deactivated successfully. 
Nov 4 23:57:19.012632 containerd[2578]: time="2025-11-04T23:57:19.012497854Z" level=info msg="StartContainer for \"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\" returns successfully" Nov 4 23:57:19.104796 containerd[2578]: time="2025-11-04T23:57:19.104765887Z" level=info msg="CreateContainer within sandbox \"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 4 23:57:19.278116 containerd[2578]: time="2025-11-04T23:57:19.277990023Z" level=info msg="Container ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:57:19.408054 kubelet[4028]: I1104 23:57:19.407945 4028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-k28jv" podStartSLOduration=8.39645813 podStartE2EDuration="40.407930312s" podCreationTimestamp="2025-11-04 23:56:39 +0000 UTC" firstStartedPulling="2025-11-04 23:56:45.294040921 +0000 UTC m=+7.520180563" lastFinishedPulling="2025-11-04 23:57:17.305513099 +0000 UTC m=+39.531652745" observedRunningTime="2025-11-04 23:57:19.250761229 +0000 UTC m=+41.476900880" watchObservedRunningTime="2025-11-04 23:57:19.407930312 +0000 UTC m=+41.634069969" Nov 4 23:57:19.411504 containerd[2578]: time="2025-11-04T23:57:19.411456659Z" level=info msg="CreateContainer within sandbox \"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a\"" Nov 4 23:57:19.414129 containerd[2578]: time="2025-11-04T23:57:19.412798191Z" level=info msg="StartContainer for \"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a\"" Nov 4 23:57:19.414129 containerd[2578]: time="2025-11-04T23:57:19.413560903Z" level=info msg="connecting to shim ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a" address="unix:///run/containerd/s/9146dfcde77747ea62d4fcd2d202f605ff225d83a4e0680b9927a64520c73f9b" protocol=ttrpc version=3 Nov 4 23:57:19.446539 systemd[1]: Started cri-containerd-ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a.scope - libcontainer container ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a. Nov 4 23:57:19.497205 systemd[1]: cri-containerd-ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a.scope: Deactivated successfully. 
Nov 4 23:57:19.500929 containerd[2578]: time="2025-11-04T23:57:19.499264329Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a\" id:\"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a\" pid:4614 exited_at:{seconds:1762300639 nanos:499009442}" Nov 4 23:57:19.503317 containerd[2578]: time="2025-11-04T23:57:19.503226509Z" level=info msg="received exit event container_id:\"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a\" id:\"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a\" pid:4614 exited_at:{seconds:1762300639 nanos:499009442}" Nov 4 23:57:19.517262 containerd[2578]: time="2025-11-04T23:57:19.517228388Z" level=info msg="StartContainer for \"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a\" returns successfully" Nov 4 23:57:19.534802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a-rootfs.mount: Deactivated successfully. Nov 4 23:57:21.087935 containerd[2578]: time="2025-11-04T23:57:21.087228671Z" level=info msg="CreateContainer within sandbox \"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 4 23:57:21.217324 containerd[2578]: time="2025-11-04T23:57:21.217225939Z" level=info msg="Container aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:57:21.220849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4259115620.mount: Deactivated successfully. Nov 4 23:57:21.310042 containerd[2578]: time="2025-11-04T23:57:21.310012699Z" level=info msg="CreateContainer within sandbox \"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\"" Nov 4 23:57:21.310958 containerd[2578]: time="2025-11-04T23:57:21.310743049Z" level=info msg="StartContainer for \"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\"" Nov 4 23:57:21.312052 containerd[2578]: time="2025-11-04T23:57:21.311947906Z" level=info msg="connecting to shim aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299" address="unix:///run/containerd/s/9146dfcde77747ea62d4fcd2d202f605ff225d83a4e0680b9927a64520c73f9b" protocol=ttrpc version=3 Nov 4 23:57:21.334243 systemd[1]: Started cri-containerd-aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299.scope - libcontainer container aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299. 
Nov 4 23:57:21.365872 containerd[2578]: time="2025-11-04T23:57:21.365751297Z" level=info msg="StartContainer for \"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" returns successfully" Nov 4 23:57:21.435647 containerd[2578]: time="2025-11-04T23:57:21.435606061Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" id:\"c2a4821c358a2c7a2bc9370e0acfbb56a413d221a44a2147790765b1936ad3b2\" pid:4684 exited_at:{seconds:1762300641 nanos:434988944}" Nov 4 23:57:21.438839 kubelet[4028]: I1104 23:57:21.438821 4028 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 4 23:57:21.575625 systemd[1]: Created slice kubepods-burstable-pod9df566f7_3f37_40bc_9881_a07f7eb3701f.slice - libcontainer container kubepods-burstable-pod9df566f7_3f37_40bc_9881_a07f7eb3701f.slice. Nov 4 23:57:21.581000 systemd[1]: Created slice kubepods-burstable-pod7902e35b_c1ac_4fca_b41a_525a696051f6.slice - libcontainer container kubepods-burstable-pod7902e35b_c1ac_4fca_b41a_525a696051f6.slice. Nov 4 23:57:21.621355 kubelet[4028]: I1104 23:57:21.621270 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7902e35b-c1ac-4fca-b41a-525a696051f6-config-volume\") pod \"coredns-66bc5c9577-5dl6g\" (UID: \"7902e35b-c1ac-4fca-b41a-525a696051f6\") " pod="kube-system/coredns-66bc5c9577-5dl6g" Nov 4 23:57:21.621355 kubelet[4028]: I1104 23:57:21.621307 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qkmt\" (UniqueName: \"kubernetes.io/projected/9df566f7-3f37-40bc-9881-a07f7eb3701f-kube-api-access-7qkmt\") pod \"coredns-66bc5c9577-8qnwh\" (UID: \"9df566f7-3f37-40bc-9881-a07f7eb3701f\") " pod="kube-system/coredns-66bc5c9577-8qnwh" Nov 4 23:57:21.621355 kubelet[4028]: I1104 23:57:21.621350 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzjqs\" (UniqueName: \"kubernetes.io/projected/7902e35b-c1ac-4fca-b41a-525a696051f6-kube-api-access-dzjqs\") pod \"coredns-66bc5c9577-5dl6g\" (UID: \"7902e35b-c1ac-4fca-b41a-525a696051f6\") " pod="kube-system/coredns-66bc5c9577-5dl6g" Nov 4 23:57:21.621491 kubelet[4028]: I1104 23:57:21.621367 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9df566f7-3f37-40bc-9881-a07f7eb3701f-config-volume\") pod \"coredns-66bc5c9577-8qnwh\" (UID: \"9df566f7-3f37-40bc-9881-a07f7eb3701f\") " pod="kube-system/coredns-66bc5c9577-8qnwh" Nov 4 23:57:21.885646 containerd[2578]: time="2025-11-04T23:57:21.885551830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8qnwh,Uid:9df566f7-3f37-40bc-9881-a07f7eb3701f,Namespace:kube-system,Attempt:0,}" Nov 4 23:57:21.909303 containerd[2578]: time="2025-11-04T23:57:21.909271122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5dl6g,Uid:7902e35b-c1ac-4fca-b41a-525a696051f6,Namespace:kube-system,Attempt:0,}" Nov 4 23:57:23.127044 containerd[2578]: time="2025-11-04T23:57:23.127000328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" id:\"38067a9ef1356bf2967513f72daed13775a9aeac53fcf9246ed0a400be946654\" pid:4787 exit_status:1 exited_at:{seconds:1762300643 nanos:126566184}" Nov 4 
23:57:23.621601 systemd-networkd[2207]: cilium_host: Link UP Nov 4 23:57:23.623731 systemd-networkd[2207]: cilium_net: Link UP Nov 4 23:57:23.623891 systemd-networkd[2207]: cilium_host: Gained carrier Nov 4 23:57:23.624002 systemd-networkd[2207]: cilium_net: Gained carrier Nov 4 23:57:23.723189 systemd-networkd[2207]: cilium_host: Gained IPv6LL Nov 4 23:57:23.790695 systemd-networkd[2207]: cilium_vxlan: Link UP Nov 4 23:57:23.790701 systemd-networkd[2207]: cilium_vxlan: Gained carrier Nov 4 23:57:23.819203 systemd-networkd[2207]: cilium_net: Gained IPv6LL Nov 4 23:57:24.060117 kernel: NET: Registered PF_ALG protocol family Nov 4 23:57:24.722558 systemd-networkd[2207]: lxc_health: Link UP Nov 4 23:57:24.736271 systemd-networkd[2207]: lxc_health: Gained carrier Nov 4 23:57:24.988704 systemd-networkd[2207]: lxc6bdcf284b3ea: Link UP Nov 4 23:57:24.989302 kernel: eth0: renamed from tmp9c0b1 Nov 4 23:57:24.991387 systemd-networkd[2207]: lxc6bdcf284b3ea: Gained carrier Nov 4 23:57:25.011173 systemd-networkd[2207]: cilium_vxlan: Gained IPv6LL Nov 4 23:57:25.047115 kernel: eth0: renamed from tmp31523 Nov 4 23:57:25.047539 systemd-networkd[2207]: lxcbc32e23df4cb: Link UP Nov 4 23:57:25.049238 systemd-networkd[2207]: lxcbc32e23df4cb: Gained carrier Nov 4 23:57:25.280269 containerd[2578]: time="2025-11-04T23:57:25.279626257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" id:\"a08a04081aaa0e0fbd9d49718cdafd0a0736100851a025146aa56910ec8d62ff\" pid:5174 exited_at:{seconds:1762300645 nanos:278135526}" Nov 4 23:57:25.744142 kubelet[4028]: I1104 23:57:25.744066 4028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kqkr9" podStartSLOduration=40.017745087 podStartE2EDuration="46.744048216s" podCreationTimestamp="2025-11-04 23:56:39 +0000 UTC" firstStartedPulling="2025-11-04 23:56:45.266060693 +0000 UTC m=+7.492200349" lastFinishedPulling="2025-11-04 23:56:51.992363835 +0000 UTC m=+14.218503478" observedRunningTime="2025-11-04 23:57:22.099448193 +0000 UTC m=+44.325587869" watchObservedRunningTime="2025-11-04 23:57:25.744048216 +0000 UTC m=+47.970187875" Nov 4 23:57:26.419243 systemd-networkd[2207]: lxcbc32e23df4cb: Gained IPv6LL Nov 4 23:57:26.611254 systemd-networkd[2207]: lxc_health: Gained IPv6LL Nov 4 23:57:26.867370 systemd-networkd[2207]: lxc6bdcf284b3ea: Gained IPv6LL Nov 4 23:57:27.391969 containerd[2578]: time="2025-11-04T23:57:27.391921654Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" id:\"adc194f65b2e09184b477aaadea93e20a3d291384f3e4edc3a734612780fe97e\" pid:5209 exited_at:{seconds:1762300647 nanos:391584591}" Nov 4 23:57:28.566529 containerd[2578]: time="2025-11-04T23:57:28.566483329Z" level=info msg="connecting to shim 9c0b1957b7ea3f2ddcefdef874f98512053a06aee2201fa9835d1d4626cd968a" address="unix:///run/containerd/s/8573c71607bf91429529c06811c5a75c84a1d28cb95f95fa947715aa8edf5ed1" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:57:28.588332 systemd[1]: Started cri-containerd-9c0b1957b7ea3f2ddcefdef874f98512053a06aee2201fa9835d1d4626cd968a.scope - libcontainer container 9c0b1957b7ea3f2ddcefdef874f98512053a06aee2201fa9835d1d4626cd968a. 
Nov 4 23:57:28.615782 containerd[2578]: time="2025-11-04T23:57:28.615743808Z" level=info msg="connecting to shim 31523bb9c15ba7f370a085d7e052a2ad0da7c94172a1e75de1aae5f391e45f43" address="unix:///run/containerd/s/1f30f9a482549dc8fcd59654b89e35d3f924c6468383ff4d2675891905a205e8" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:57:28.646227 systemd[1]: Started cri-containerd-31523bb9c15ba7f370a085d7e052a2ad0da7c94172a1e75de1aae5f391e45f43.scope - libcontainer container 31523bb9c15ba7f370a085d7e052a2ad0da7c94172a1e75de1aae5f391e45f43. Nov 4 23:57:28.665965 containerd[2578]: time="2025-11-04T23:57:28.665934885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8qnwh,Uid:9df566f7-3f37-40bc-9881-a07f7eb3701f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c0b1957b7ea3f2ddcefdef874f98512053a06aee2201fa9835d1d4626cd968a\"" Nov 4 23:57:28.716682 containerd[2578]: time="2025-11-04T23:57:28.716654742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5dl6g,Uid:7902e35b-c1ac-4fca-b41a-525a696051f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"31523bb9c15ba7f370a085d7e052a2ad0da7c94172a1e75de1aae5f391e45f43\"" Nov 4 23:57:28.717373 containerd[2578]: time="2025-11-04T23:57:28.717331275Z" level=info msg="CreateContainer within sandbox \"9c0b1957b7ea3f2ddcefdef874f98512053a06aee2201fa9835d1d4626cd968a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:57:28.807382 containerd[2578]: time="2025-11-04T23:57:28.807331022Z" level=info msg="CreateContainer within sandbox \"31523bb9c15ba7f370a085d7e052a2ad0da7c94172a1e75de1aae5f391e45f43\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:57:29.063235 containerd[2578]: time="2025-11-04T23:57:29.063188720Z" level=info msg="Container 8b058315465a6529c7494427eccf82c4a61c92aa1172570fc89bfb768ded5d15: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:57:29.154830 containerd[2578]: time="2025-11-04T23:57:29.154785056Z" level=info msg="Container f8ab2ba22af843ab9ca77aa83112995b38b4540759e4fde91b8801494226b176: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:57:29.362782 containerd[2578]: time="2025-11-04T23:57:29.362646809Z" level=info msg="CreateContainer within sandbox \"9c0b1957b7ea3f2ddcefdef874f98512053a06aee2201fa9835d1d4626cd968a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b058315465a6529c7494427eccf82c4a61c92aa1172570fc89bfb768ded5d15\"" Nov 4 23:57:29.363486 containerd[2578]: time="2025-11-04T23:57:29.363387207Z" level=info msg="StartContainer for \"8b058315465a6529c7494427eccf82c4a61c92aa1172570fc89bfb768ded5d15\"" Nov 4 23:57:29.365238 containerd[2578]: time="2025-11-04T23:57:29.365191198Z" level=info msg="connecting to shim 8b058315465a6529c7494427eccf82c4a61c92aa1172570fc89bfb768ded5d15" address="unix:///run/containerd/s/8573c71607bf91429529c06811c5a75c84a1d28cb95f95fa947715aa8edf5ed1" protocol=ttrpc version=3 Nov 4 23:57:29.367406 containerd[2578]: time="2025-11-04T23:57:29.367363583Z" level=info msg="CreateContainer within sandbox \"31523bb9c15ba7f370a085d7e052a2ad0da7c94172a1e75de1aae5f391e45f43\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8ab2ba22af843ab9ca77aa83112995b38b4540759e4fde91b8801494226b176\"" Nov 4 23:57:29.367812 containerd[2578]: time="2025-11-04T23:57:29.367789015Z" level=info msg="StartContainer for \"f8ab2ba22af843ab9ca77aa83112995b38b4540759e4fde91b8801494226b176\"" Nov 4 23:57:29.368913 containerd[2578]: time="2025-11-04T23:57:29.368878794Z" level=info 
msg="connecting to shim f8ab2ba22af843ab9ca77aa83112995b38b4540759e4fde91b8801494226b176" address="unix:///run/containerd/s/1f30f9a482549dc8fcd59654b89e35d3f924c6468383ff4d2675891905a205e8" protocol=ttrpc version=3 Nov 4 23:57:29.388311 systemd[1]: Started cri-containerd-8b058315465a6529c7494427eccf82c4a61c92aa1172570fc89bfb768ded5d15.scope - libcontainer container 8b058315465a6529c7494427eccf82c4a61c92aa1172570fc89bfb768ded5d15. Nov 4 23:57:29.395325 systemd[1]: Started cri-containerd-f8ab2ba22af843ab9ca77aa83112995b38b4540759e4fde91b8801494226b176.scope - libcontainer container f8ab2ba22af843ab9ca77aa83112995b38b4540759e4fde91b8801494226b176. Nov 4 23:57:29.433516 containerd[2578]: time="2025-11-04T23:57:29.432948746Z" level=info msg="StartContainer for \"8b058315465a6529c7494427eccf82c4a61c92aa1172570fc89bfb768ded5d15\" returns successfully" Nov 4 23:57:29.463517 containerd[2578]: time="2025-11-04T23:57:29.463314508Z" level=info msg="StartContainer for \"f8ab2ba22af843ab9ca77aa83112995b38b4540759e4fde91b8801494226b176\" returns successfully" Nov 4 23:57:29.538398 containerd[2578]: time="2025-11-04T23:57:29.538363225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" id:\"e475e4e06046f7a72c6efa405d524188f20c2c2de635e5d8ae8309e9c7b0ef12\" pid:5391 exited_at:{seconds:1762300649 nanos:537461085}" Nov 4 23:57:30.143410 kubelet[4028]: I1104 23:57:30.143350 4028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5dl6g" podStartSLOduration=53.143323806 podStartE2EDuration="53.143323806s" podCreationTimestamp="2025-11-04 23:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:57:30.11848953 +0000 UTC m=+52.344629191" watchObservedRunningTime="2025-11-04 23:57:30.143323806 +0000 UTC m=+52.369463461" Nov 4 23:57:30.196225 kubelet[4028]: I1104 23:57:30.196169 4028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8qnwh" podStartSLOduration=53.196150425 podStartE2EDuration="53.196150425s" podCreationTimestamp="2025-11-04 23:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:57:30.195539335 +0000 UTC m=+52.421678991" watchObservedRunningTime="2025-11-04 23:57:30.196150425 +0000 UTC m=+52.422290109" Nov 4 23:57:31.611689 containerd[2578]: time="2025-11-04T23:57:31.611646619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" id:\"200fec41325bc4a7587912acf9e0f039d4c6c374b2c97a3d85e655e750b413b4\" pid:5423 exited_at:{seconds:1762300651 nanos:611355892}" Nov 4 23:57:31.730320 containerd[2578]: time="2025-11-04T23:57:31.730281140Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" id:\"a68666760f7e3fcb48f5cccdf83b4c576a54ad00cf0fa115e4d968a326effd98\" pid:5453 exited_at:{seconds:1762300651 nanos:729965514}" Nov 4 23:57:32.192622 sudo[3022]: pam_unix(sudo:session): session closed for user root Nov 4 23:57:32.295375 sshd[3019]: Connection closed by 10.200.16.10 port 37024 Nov 4 23:57:32.295920 sshd-session[3013]: pam_unix(sshd:session): session closed for user core Nov 4 23:57:32.299307 systemd[1]: sshd@6-10.200.8.17:22-10.200.16.10:37024.service: 
Deactivated successfully. Nov 4 23:57:32.301640 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 23:57:32.301832 systemd[1]: session-9.scope: Consumed 4.824s CPU time, 271.7M memory peak. Nov 4 23:57:32.303135 systemd-logind[2547]: Session 9 logged out. Waiting for processes to exit. Nov 4 23:57:32.305218 systemd-logind[2547]: Removed session 9. Nov 4 23:59:03.347249 systemd[1]: Started sshd@7-10.200.8.17:22-10.200.16.10:40422.service - OpenSSH per-connection server daemon (10.200.16.10:40422). Nov 4 23:59:03.977801 sshd[5498]: Accepted publickey for core from 10.200.16.10 port 40422 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:03.979441 sshd-session[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:03.985010 systemd-logind[2547]: New session 10 of user core. Nov 4 23:59:03.989400 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 23:59:04.496013 sshd[5501]: Connection closed by 10.200.16.10 port 40422 Nov 4 23:59:04.496547 sshd-session[5498]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:04.499430 systemd[1]: sshd@7-10.200.8.17:22-10.200.16.10:40422.service: Deactivated successfully. Nov 4 23:59:04.501229 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 23:59:04.503276 systemd-logind[2547]: Session 10 logged out. Waiting for processes to exit. Nov 4 23:59:04.503989 systemd-logind[2547]: Removed session 10. Nov 4 23:59:09.617348 systemd[1]: Started sshd@8-10.200.8.17:22-10.200.16.10:40436.service - OpenSSH per-connection server daemon (10.200.16.10:40436). Nov 4 23:59:10.264619 sshd[5514]: Accepted publickey for core from 10.200.16.10 port 40436 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:10.265783 sshd-session[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:10.270289 systemd-logind[2547]: New session 11 of user core. Nov 4 23:59:10.277259 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 23:59:10.757672 sshd[5517]: Connection closed by 10.200.16.10 port 40436 Nov 4 23:59:10.758260 sshd-session[5514]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:10.761670 systemd[1]: sshd@8-10.200.8.17:22-10.200.16.10:40436.service: Deactivated successfully. Nov 4 23:59:10.763501 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 23:59:10.764333 systemd-logind[2547]: Session 11 logged out. Waiting for processes to exit. Nov 4 23:59:10.765883 systemd-logind[2547]: Removed session 11. Nov 4 23:59:15.882241 systemd[1]: Started sshd@9-10.200.8.17:22-10.200.16.10:46680.service - OpenSSH per-connection server daemon (10.200.16.10:46680). Nov 4 23:59:16.514116 sshd[5532]: Accepted publickey for core from 10.200.16.10 port 46680 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:16.515259 sshd-session[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:16.519145 systemd-logind[2547]: New session 12 of user core. Nov 4 23:59:16.527399 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 23:59:17.015272 sshd[5535]: Connection closed by 10.200.16.10 port 46680 Nov 4 23:59:17.015841 sshd-session[5532]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:17.019206 systemd[1]: sshd@9-10.200.8.17:22-10.200.16.10:46680.service: Deactivated successfully. Nov 4 23:59:17.021019 systemd[1]: session-12.scope: Deactivated successfully. 
Nov 4 23:59:17.021759 systemd-logind[2547]: Session 12 logged out. Waiting for processes to exit. Nov 4 23:59:17.023002 systemd-logind[2547]: Removed session 12. Nov 4 23:59:22.133386 systemd[1]: Started sshd@10-10.200.8.17:22-10.200.16.10:50790.service - OpenSSH per-connection server daemon (10.200.16.10:50790). Nov 4 23:59:22.807619 sshd[5548]: Accepted publickey for core from 10.200.16.10 port 50790 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:22.808837 sshd-session[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:22.812896 systemd-logind[2547]: New session 13 of user core. Nov 4 23:59:22.817261 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 23:59:23.300201 sshd[5551]: Connection closed by 10.200.16.10 port 50790 Nov 4 23:59:23.300776 sshd-session[5548]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:23.305138 systemd-logind[2547]: Session 13 logged out. Waiting for processes to exit. Nov 4 23:59:23.306133 systemd[1]: sshd@10-10.200.8.17:22-10.200.16.10:50790.service: Deactivated successfully. Nov 4 23:59:23.309828 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 23:59:23.311710 systemd-logind[2547]: Removed session 13. Nov 4 23:59:23.420479 systemd[1]: Started sshd@11-10.200.8.17:22-10.200.16.10:50802.service - OpenSSH per-connection server daemon (10.200.16.10:50802). Nov 4 23:59:23.656617 update_engine[2548]: I20251104 23:59:23.656508 2548 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 4 23:59:23.656617 update_engine[2548]: I20251104 23:59:23.656551 2548 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 4 23:59:23.656977 update_engine[2548]: I20251104 23:59:23.656695 2548 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 4 23:59:23.657132 update_engine[2548]: I20251104 23:59:23.657070 2548 omaha_request_params.cc:62] Current group set to alpha Nov 4 23:59:23.657402 update_engine[2548]: I20251104 23:59:23.657387 2548 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 4 23:59:23.657446 update_engine[2548]: I20251104 23:59:23.657439 2548 update_attempter.cc:643] Scheduling an action processor start. 
Nov 4 23:59:23.657497 update_engine[2548]: I20251104 23:59:23.657490 2548 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 4 23:59:23.657552 update_engine[2548]: I20251104 23:59:23.657544 2548 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 4 23:59:23.657632 update_engine[2548]: I20251104 23:59:23.657624 2548 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 4 23:59:23.657663 update_engine[2548]: I20251104 23:59:23.657655 2548 omaha_request_action.cc:272] Request: Nov 4 23:59:23.657663 update_engine[2548]: Nov 4 23:59:23.657663 update_engine[2548]: Nov 4 23:59:23.657663 update_engine[2548]: Nov 4 23:59:23.657663 update_engine[2548]: Nov 4 23:59:23.657663 update_engine[2548]: Nov 4 23:59:23.657663 update_engine[2548]: Nov 4 23:59:23.657663 update_engine[2548]: Nov 4 23:59:23.657663 update_engine[2548]: Nov 4 23:59:23.657846 locksmithd[2667]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 4 23:59:23.658029 update_engine[2548]: I20251104 23:59:23.658010 2548 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 4 23:59:23.658904 update_engine[2548]: I20251104 23:59:23.658881 2548 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 4 23:59:23.659403 update_engine[2548]: I20251104 23:59:23.659379 2548 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 4 23:59:23.724921 update_engine[2548]: E20251104 23:59:23.724874 2548 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 4 23:59:23.725030 update_engine[2548]: I20251104 23:59:23.724970 2548 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 4 23:59:24.049718 sshd[5564]: Accepted publickey for core from 10.200.16.10 port 50802 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:24.051081 sshd-session[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:24.055561 systemd-logind[2547]: New session 14 of user core. Nov 4 23:59:24.063228 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 23:59:24.583654 sshd[5567]: Connection closed by 10.200.16.10 port 50802 Nov 4 23:59:24.584213 sshd-session[5564]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:24.587585 systemd[1]: sshd@11-10.200.8.17:22-10.200.16.10:50802.service: Deactivated successfully. Nov 4 23:59:24.589564 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 23:59:24.590362 systemd-logind[2547]: Session 14 logged out. Waiting for processes to exit. Nov 4 23:59:24.592005 systemd-logind[2547]: Removed session 14. Nov 4 23:59:24.696319 systemd[1]: Started sshd@12-10.200.8.17:22-10.200.16.10:50806.service - OpenSSH per-connection server daemon (10.200.16.10:50806). Nov 4 23:59:25.338566 sshd[5577]: Accepted publickey for core from 10.200.16.10 port 50806 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:25.339888 sshd-session[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:25.344272 systemd-logind[2547]: New session 15 of user core. Nov 4 23:59:25.352242 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 4 23:59:25.831229 sshd[5580]: Connection closed by 10.200.16.10 port 50806 Nov 4 23:59:25.831779 sshd-session[5577]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:25.835317 systemd[1]: sshd@12-10.200.8.17:22-10.200.16.10:50806.service: Deactivated successfully. Nov 4 23:59:25.837196 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 23:59:25.838306 systemd-logind[2547]: Session 15 logged out. Waiting for processes to exit. Nov 4 23:59:25.839666 systemd-logind[2547]: Removed session 15. Nov 4 23:59:30.942187 systemd[1]: Started sshd@13-10.200.8.17:22-10.200.16.10:52918.service - OpenSSH per-connection server daemon (10.200.16.10:52918). Nov 4 23:59:31.583593 sshd[5592]: Accepted publickey for core from 10.200.16.10 port 52918 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:31.584762 sshd-session[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:31.589513 systemd-logind[2547]: New session 16 of user core. Nov 4 23:59:31.598216 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 23:59:32.074751 sshd[5595]: Connection closed by 10.200.16.10 port 52918 Nov 4 23:59:32.075286 sshd-session[5592]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:32.078805 systemd-logind[2547]: Session 16 logged out. Waiting for processes to exit. Nov 4 23:59:32.079107 systemd[1]: sshd@13-10.200.8.17:22-10.200.16.10:52918.service: Deactivated successfully. Nov 4 23:59:32.080714 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 23:59:32.082404 systemd-logind[2547]: Removed session 16. Nov 4 23:59:32.188899 systemd[1]: Started sshd@14-10.200.8.17:22-10.200.16.10:52924.service - OpenSSH per-connection server daemon (10.200.16.10:52924). Nov 4 23:59:32.817019 sshd[5607]: Accepted publickey for core from 10.200.16.10 port 52924 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:32.817469 sshd-session[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:32.822018 systemd-logind[2547]: New session 17 of user core. Nov 4 23:59:32.826223 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 23:59:33.370249 sshd[5610]: Connection closed by 10.200.16.10 port 52924 Nov 4 23:59:33.370715 sshd-session[5607]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:33.374060 systemd[1]: sshd@14-10.200.8.17:22-10.200.16.10:52924.service: Deactivated successfully. Nov 4 23:59:33.375854 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 23:59:33.376667 systemd-logind[2547]: Session 17 logged out. Waiting for processes to exit. Nov 4 23:59:33.378448 systemd-logind[2547]: Removed session 17. Nov 4 23:59:33.483359 systemd[1]: Started sshd@15-10.200.8.17:22-10.200.16.10:52936.service - OpenSSH per-connection server daemon (10.200.16.10:52936). Nov 4 23:59:33.656261 update_engine[2548]: I20251104 23:59:33.656128 2548 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 4 23:59:33.656261 update_engine[2548]: I20251104 23:59:33.656242 2548 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 4 23:59:33.656598 update_engine[2548]: I20251104 23:59:33.656563 2548 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 4 23:59:33.682920 update_engine[2548]: E20251104 23:59:33.682884 2548 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 4 23:59:33.683013 update_engine[2548]: I20251104 23:59:33.682968 2548 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 4 23:59:34.118584 sshd[5620]: Accepted publickey for core from 10.200.16.10 port 52936 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:34.119676 sshd-session[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:34.124043 systemd-logind[2547]: New session 18 of user core. Nov 4 23:59:34.133229 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 23:59:35.062739 sshd[5623]: Connection closed by 10.200.16.10 port 52936 Nov 4 23:59:35.064262 sshd-session[5620]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:35.067486 systemd[1]: sshd@15-10.200.8.17:22-10.200.16.10:52936.service: Deactivated successfully. Nov 4 23:59:35.069562 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 23:59:35.070306 systemd-logind[2547]: Session 18 logged out. Waiting for processes to exit. Nov 4 23:59:35.071454 systemd-logind[2547]: Removed session 18. Nov 4 23:59:35.173008 systemd[1]: Started sshd@16-10.200.8.17:22-10.200.16.10:52938.service - OpenSSH per-connection server daemon (10.200.16.10:52938). Nov 4 23:59:35.802576 sshd[5638]: Accepted publickey for core from 10.200.16.10 port 52938 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:35.803674 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:35.807847 systemd-logind[2547]: New session 19 of user core. Nov 4 23:59:35.814228 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 23:59:36.370705 sshd[5641]: Connection closed by 10.200.16.10 port 52938 Nov 4 23:59:36.371234 sshd-session[5638]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:36.374562 systemd[1]: sshd@16-10.200.8.17:22-10.200.16.10:52938.service: Deactivated successfully. Nov 4 23:59:36.376181 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 23:59:36.376900 systemd-logind[2547]: Session 19 logged out. Waiting for processes to exit. Nov 4 23:59:36.377988 systemd-logind[2547]: Removed session 19. Nov 4 23:59:36.485349 systemd[1]: Started sshd@17-10.200.8.17:22-10.200.16.10:52954.service - OpenSSH per-connection server daemon (10.200.16.10:52954). Nov 4 23:59:37.117986 sshd[5653]: Accepted publickey for core from 10.200.16.10 port 52954 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:37.119076 sshd-session[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:37.123290 systemd-logind[2547]: New session 20 of user core. Nov 4 23:59:37.134230 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 4 23:59:37.609828 sshd[5656]: Connection closed by 10.200.16.10 port 52954 Nov 4 23:59:37.610328 sshd-session[5653]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:37.613688 systemd[1]: sshd@17-10.200.8.17:22-10.200.16.10:52954.service: Deactivated successfully. Nov 4 23:59:37.615411 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 23:59:37.616665 systemd-logind[2547]: Session 20 logged out. Waiting for processes to exit. Nov 4 23:59:37.617565 systemd-logind[2547]: Removed session 20. 
Nov 4 23:59:42.731116 systemd[1]: Started sshd@18-10.200.8.17:22-10.200.16.10:41238.service - OpenSSH per-connection server daemon (10.200.16.10:41238). Nov 4 23:59:43.364614 sshd[5672]: Accepted publickey for core from 10.200.16.10 port 41238 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:43.365770 sshd-session[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:43.369928 systemd-logind[2547]: New session 21 of user core. Nov 4 23:59:43.376236 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 4 23:59:43.656513 update_engine[2548]: I20251104 23:59:43.656394 2548 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 4 23:59:43.656513 update_engine[2548]: I20251104 23:59:43.656474 2548 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 4 23:59:43.656848 update_engine[2548]: I20251104 23:59:43.656816 2548 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 4 23:59:43.684668 update_engine[2548]: E20251104 23:59:43.684633 2548 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 4 23:59:43.684758 update_engine[2548]: I20251104 23:59:43.684715 2548 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 4 23:59:43.865782 sshd[5675]: Connection closed by 10.200.16.10 port 41238 Nov 4 23:59:43.866280 sshd-session[5672]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:43.869692 systemd[1]: sshd@18-10.200.8.17:22-10.200.16.10:41238.service: Deactivated successfully. Nov 4 23:59:43.871650 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 23:59:43.872371 systemd-logind[2547]: Session 21 logged out. Waiting for processes to exit. Nov 4 23:59:43.873562 systemd-logind[2547]: Removed session 21. Nov 4 23:59:49.003741 systemd[1]: Started sshd@19-10.200.8.17:22-10.200.16.10:41240.service - OpenSSH per-connection server daemon (10.200.16.10:41240). Nov 4 23:59:49.633489 sshd[5689]: Accepted publickey for core from 10.200.16.10 port 41240 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:49.634849 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:49.639152 systemd-logind[2547]: New session 22 of user core. Nov 4 23:59:49.647235 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 4 23:59:50.127543 sshd[5692]: Connection closed by 10.200.16.10 port 41240 Nov 4 23:59:50.128064 sshd-session[5689]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:50.131516 systemd[1]: sshd@19-10.200.8.17:22-10.200.16.10:41240.service: Deactivated successfully. Nov 4 23:59:50.133101 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 23:59:50.134234 systemd-logind[2547]: Session 22 logged out. Waiting for processes to exit. Nov 4 23:59:50.135624 systemd-logind[2547]: Removed session 22. Nov 4 23:59:50.242000 systemd[1]: Started sshd@20-10.200.8.17:22-10.200.16.10:48704.service - OpenSSH per-connection server daemon (10.200.16.10:48704). Nov 4 23:59:50.869834 sshd[5704]: Accepted publickey for core from 10.200.16.10 port 48704 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:50.870934 sshd-session[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:50.874959 systemd-logind[2547]: New session 23 of user core. Nov 4 23:59:50.879223 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 4 23:59:52.540281 containerd[2578]: time="2025-11-04T23:59:52.540226347Z" level=info msg="StopContainer for \"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\" with timeout 30 (s)" Nov 4 23:59:52.540824 containerd[2578]: time="2025-11-04T23:59:52.540743123Z" level=info msg="Stop container \"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\" with signal terminated" Nov 4 23:59:52.559017 containerd[2578]: time="2025-11-04T23:59:52.558977058Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 23:59:52.578119 containerd[2578]: time="2025-11-04T23:59:52.577808914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" id:\"e927b5883af0ae466edb9340104cd7debbff432c141d1a3972e71c2b5a0a9266\" pid:5726 exited_at:{seconds:1762300792 nanos:577577298}" Nov 4 23:59:52.579192 containerd[2578]: time="2025-11-04T23:59:52.579169913Z" level=info msg="StopContainer for \"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" with timeout 2 (s)" Nov 4 23:59:52.580339 containerd[2578]: time="2025-11-04T23:59:52.580318623Z" level=info msg="Stop container \"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" with signal terminated" Nov 4 23:59:52.593492 systemd-networkd[2207]: lxc_health: Link DOWN Nov 4 23:59:52.593500 systemd-networkd[2207]: lxc_health: Lost carrier Nov 4 23:59:52.617265 systemd[1]: cri-containerd-965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0.scope: Deactivated successfully. Nov 4 23:59:52.618635 systemd[1]: cri-containerd-aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299.scope: Deactivated successfully. Nov 4 23:59:52.618917 systemd[1]: cri-containerd-aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299.scope: Consumed 5.715s CPU time, 139.4M memory peak, 136K read from disk, 14.3M written to disk. 
Nov 4 23:59:52.619930 containerd[2578]: time="2025-11-04T23:59:52.619893603Z" level=info msg="received exit event container_id:\"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\" id:\"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\" pid:4579 exited_at:{seconds:1762300792 nanos:619639822}" Nov 4 23:59:52.620545 containerd[2578]: time="2025-11-04T23:59:52.620513690Z" level=info msg="TaskExit event in podsandbox handler container_id:\"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\" id:\"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\" pid:4579 exited_at:{seconds:1762300792 nanos:619639822}" Nov 4 23:59:52.622035 containerd[2578]: time="2025-11-04T23:59:52.622000343Z" level=info msg="received exit event container_id:\"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" id:\"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" pid:4652 exited_at:{seconds:1762300792 nanos:621861470}" Nov 4 23:59:52.622522 containerd[2578]: time="2025-11-04T23:59:52.622499490Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" id:\"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" pid:4652 exited_at:{seconds:1762300792 nanos:621861470}" Nov 4 23:59:52.654035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0-rootfs.mount: Deactivated successfully. Nov 4 23:59:52.657680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299-rootfs.mount: Deactivated successfully. Nov 4 23:59:52.724047 containerd[2578]: time="2025-11-04T23:59:52.724024079Z" level=info msg="StopContainer for \"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\" returns successfully" Nov 4 23:59:52.724535 containerd[2578]: time="2025-11-04T23:59:52.724516104Z" level=info msg="StopPodSandbox for \"f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec\"" Nov 4 23:59:52.724604 containerd[2578]: time="2025-11-04T23:59:52.724572253Z" level=info msg="Container to stop \"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:59:52.728100 containerd[2578]: time="2025-11-04T23:59:52.728052722Z" level=info msg="StopContainer for \"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" returns successfully" Nov 4 23:59:52.728897 containerd[2578]: time="2025-11-04T23:59:52.728622484Z" level=info msg="StopPodSandbox for \"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\"" Nov 4 23:59:52.728897 containerd[2578]: time="2025-11-04T23:59:52.728674876Z" level=info msg="Container to stop \"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:59:52.728897 containerd[2578]: time="2025-11-04T23:59:52.728687051Z" level=info msg="Container to stop \"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:59:52.728897 containerd[2578]: time="2025-11-04T23:59:52.728696696Z" level=info msg="Container to stop \"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:59:52.728897 containerd[2578]: 
time="2025-11-04T23:59:52.728706203Z" level=info msg="Container to stop \"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:59:52.728897 containerd[2578]: time="2025-11-04T23:59:52.728714534Z" level=info msg="Container to stop \"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:59:52.731829 systemd[1]: cri-containerd-f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec.scope: Deactivated successfully. Nov 4 23:59:52.735005 containerd[2578]: time="2025-11-04T23:59:52.734977813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec\" id:\"f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec\" pid:4203 exit_status:137 exited_at:{seconds:1762300792 nanos:734705476}" Nov 4 23:59:52.741696 systemd[1]: cri-containerd-ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd.scope: Deactivated successfully. Nov 4 23:59:52.770329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd-rootfs.mount: Deactivated successfully. Nov 4 23:59:52.774479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec-rootfs.mount: Deactivated successfully. Nov 4 23:59:52.785155 containerd[2578]: time="2025-11-04T23:59:52.785129217Z" level=info msg="shim disconnected" id=ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd namespace=k8s.io Nov 4 23:59:52.785155 containerd[2578]: time="2025-11-04T23:59:52.785150856Z" level=warning msg="cleaning up after shim disconnected" id=ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd namespace=k8s.io Nov 4 23:59:52.785298 containerd[2578]: time="2025-11-04T23:59:52.785158372Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 4 23:59:52.786881 containerd[2578]: time="2025-11-04T23:59:52.786856966Z" level=info msg="shim disconnected" id=f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec namespace=k8s.io Nov 4 23:59:52.786881 containerd[2578]: time="2025-11-04T23:59:52.786880429Z" level=warning msg="cleaning up after shim disconnected" id=f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec namespace=k8s.io Nov 4 23:59:52.786972 containerd[2578]: time="2025-11-04T23:59:52.786886768Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 4 23:59:52.797371 containerd[2578]: time="2025-11-04T23:59:52.797300949Z" level=info msg="received exit event sandbox_id:\"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\" exit_status:137 exited_at:{seconds:1762300792 nanos:742725610}" Nov 4 23:59:52.801107 containerd[2578]: time="2025-11-04T23:59:52.798837964Z" level=info msg="TearDown network for sandbox \"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\" successfully" Nov 4 23:59:52.801107 containerd[2578]: time="2025-11-04T23:59:52.798858731Z" level=info msg="StopPodSandbox for \"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\" returns successfully" Nov 4 23:59:52.800942 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd-shm.mount: Deactivated successfully. 
Nov 4 23:59:52.802385 containerd[2578]: time="2025-11-04T23:59:52.801760377Z" level=info msg="received exit event sandbox_id:\"f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec\" exit_status:137 exited_at:{seconds:1762300792 nanos:734705476}" Nov 4 23:59:52.804080 containerd[2578]: time="2025-11-04T23:59:52.804054272Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\" id:\"ec02fc6ef8869f629396a5767dcc4b0484ac60679a3f157f9b8f3dd6bbe850cd\" pid:4195 exit_status:137 exited_at:{seconds:1762300792 nanos:742725610}" Nov 4 23:59:52.804315 containerd[2578]: time="2025-11-04T23:59:52.804294804Z" level=info msg="TearDown network for sandbox \"f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec\" successfully" Nov 4 23:59:52.804354 containerd[2578]: time="2025-11-04T23:59:52.804333533Z" level=info msg="StopPodSandbox for \"f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec\" returns successfully" Nov 4 23:59:52.917131 kubelet[4028]: I1104 23:59:52.917103 4028 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s86sn\" (UniqueName: \"kubernetes.io/projected/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-kube-api-access-s86sn\") pod \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " Nov 4 23:59:52.917663 kubelet[4028]: I1104 23:59:52.917135 4028 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-hostproc\") pod \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " Nov 4 23:59:52.917663 kubelet[4028]: I1104 23:59:52.917152 4028 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-bpf-maps\") pod \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " Nov 4 23:59:52.917663 kubelet[4028]: I1104 23:59:52.917170 4028 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwswv\" (UniqueName: \"kubernetes.io/projected/f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16-kube-api-access-wwswv\") pod \"f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16\" (UID: \"f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16\") " Nov 4 23:59:52.917663 kubelet[4028]: I1104 23:59:52.917188 4028 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-etc-cni-netd\") pod \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " Nov 4 23:59:52.917663 kubelet[4028]: I1104 23:59:52.917205 4028 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-hubble-tls\") pod \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " Nov 4 23:59:52.917663 kubelet[4028]: I1104 23:59:52.917222 4028 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-clustermesh-secrets\") pod \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " Nov 4 23:59:52.917828 kubelet[4028]: I1104 23:59:52.917237 4028 
reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-xtables-lock\") pod \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " Nov 4 23:59:52.917828 kubelet[4028]: I1104 23:59:52.917253 4028 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cni-path\") pod \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " Nov 4 23:59:52.917828 kubelet[4028]: I1104 23:59:52.917273 4028 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-run\") pod \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " Nov 4 23:59:52.917828 kubelet[4028]: I1104 23:59:52.917288 4028 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-host-proc-sys-kernel\") pod \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " Nov 4 23:59:52.917828 kubelet[4028]: I1104 23:59:52.917309 4028 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-config-path\") pod \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " Nov 4 23:59:52.917828 kubelet[4028]: I1104 23:59:52.917349 4028 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16-cilium-config-path\") pod \"f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16\" (UID: \"f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16\") " Nov 4 23:59:52.917971 kubelet[4028]: I1104 23:59:52.917366 4028 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-host-proc-sys-net\") pod \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " Nov 4 23:59:52.917971 kubelet[4028]: I1104 23:59:52.917382 4028 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-cgroup\") pod \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " Nov 4 23:59:52.917971 kubelet[4028]: I1104 23:59:52.917400 4028 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-lib-modules\") pod \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\" (UID: \"50a433a6-aa81-458f-9e5f-1b9c98d0c7c7\") " Nov 4 23:59:52.917971 kubelet[4028]: I1104 23:59:52.917442 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:59:52.917971 kubelet[4028]: I1104 23:59:52.917472 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-hostproc" (OuterVolumeSpecName: "hostproc") pod "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:59:52.918127 kubelet[4028]: I1104 23:59:52.917489 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:59:52.919289 kubelet[4028]: I1104 23:59:52.919239 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:59:52.922471 kubelet[4028]: I1104 23:59:52.922420 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:59:52.922471 kubelet[4028]: I1104 23:59:52.922455 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cni-path" (OuterVolumeSpecName: "cni-path") pod "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:59:52.922471 kubelet[4028]: I1104 23:59:52.922471 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:59:52.922599 kubelet[4028]: I1104 23:59:52.922484 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:59:52.922599 kubelet[4028]: I1104 23:59:52.922511 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:59:52.922599 kubelet[4028]: I1104 23:59:52.922524 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:59:52.922599 kubelet[4028]: I1104 23:59:52.922574 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16-kube-api-access-wwswv" (OuterVolumeSpecName: "kube-api-access-wwswv") pod "f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16" (UID: "f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16"). InnerVolumeSpecName "kube-api-access-wwswv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:59:52.922707 kubelet[4028]: I1104 23:59:52.922629 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 23:59:52.924273 kubelet[4028]: I1104 23:59:52.924067 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 23:59:52.924273 kubelet[4028]: I1104 23:59:52.924196 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:59:52.924273 kubelet[4028]: I1104 23:59:52.924240 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-kube-api-access-s86sn" (OuterVolumeSpecName: "kube-api-access-s86sn") pod "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7" (UID: "50a433a6-aa81-458f-9e5f-1b9c98d0c7c7"). InnerVolumeSpecName "kube-api-access-s86sn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:59:52.924926 kubelet[4028]: I1104 23:59:52.924903 4028 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16" (UID: "f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 23:59:53.017662 kubelet[4028]: I1104 23:59:53.017557 4028 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wwswv\" (UniqueName: \"kubernetes.io/projected/f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16-kube-api-access-wwswv\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.017662 kubelet[4028]: I1104 23:59:53.017590 4028 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-etc-cni-netd\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.017662 kubelet[4028]: I1104 23:59:53.017597 4028 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-hubble-tls\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.017662 kubelet[4028]: I1104 23:59:53.017604 4028 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-clustermesh-secrets\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.017662 kubelet[4028]: I1104 23:59:53.017611 4028 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-xtables-lock\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.017662 kubelet[4028]: I1104 23:59:53.017618 4028 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cni-path\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.017662 kubelet[4028]: I1104 23:59:53.017624 4028 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-run\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.017662 kubelet[4028]: I1104 23:59:53.017630 4028 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-host-proc-sys-kernel\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.017865 kubelet[4028]: I1104 23:59:53.017636 4028 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-config-path\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.017865 kubelet[4028]: I1104 23:59:53.017644 4028 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16-cilium-config-path\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.017865 kubelet[4028]: I1104 23:59:53.017653 4028 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-host-proc-sys-net\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.017865 kubelet[4028]: I1104 23:59:53.017661 4028 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-cilium-cgroup\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.017865 kubelet[4028]: I1104 23:59:53.017668 
4028 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-lib-modules\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.017865 kubelet[4028]: I1104 23:59:53.017676 4028 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s86sn\" (UniqueName: \"kubernetes.io/projected/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-kube-api-access-s86sn\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.017865 kubelet[4028]: I1104 23:59:53.017685 4028 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-hostproc\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.017865 kubelet[4028]: I1104 23:59:53.017694 4028 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7-bpf-maps\") on node \"ci-4487.0.0-n-fda2ba6bd5\" DevicePath \"\"" Nov 4 23:59:53.044355 kubelet[4028]: E1104 23:59:53.044327 4028 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 4 23:59:53.377966 kubelet[4028]: I1104 23:59:53.377839 4028 scope.go:117] "RemoveContainer" containerID="aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299" Nov 4 23:59:53.381402 containerd[2578]: time="2025-11-04T23:59:53.380809091Z" level=info msg="RemoveContainer for \"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\"" Nov 4 23:59:53.387293 systemd[1]: Removed slice kubepods-burstable-pod50a433a6_aa81_458f_9e5f_1b9c98d0c7c7.slice - libcontainer container kubepods-burstable-pod50a433a6_aa81_458f_9e5f_1b9c98d0c7c7.slice. Nov 4 23:59:53.387475 systemd[1]: kubepods-burstable-pod50a433a6_aa81_458f_9e5f_1b9c98d0c7c7.slice: Consumed 5.785s CPU time, 139.9M memory peak, 136K read from disk, 14.5M written to disk. Nov 4 23:59:53.388824 systemd[1]: Removed slice kubepods-besteffort-podf56fbbfb_10dc_44b1_b5d0_bedfda2eaa16.slice - libcontainer container kubepods-besteffort-podf56fbbfb_10dc_44b1_b5d0_bedfda2eaa16.slice. 
Nov 4 23:59:53.399982 containerd[2578]: time="2025-11-04T23:59:53.399953085Z" level=info msg="RemoveContainer for \"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" returns successfully" Nov 4 23:59:53.401768 kubelet[4028]: I1104 23:59:53.400262 4028 scope.go:117] "RemoveContainer" containerID="ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a" Nov 4 23:59:53.403611 containerd[2578]: time="2025-11-04T23:59:53.403508786Z" level=info msg="RemoveContainer for \"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a\"" Nov 4 23:59:53.410577 containerd[2578]: time="2025-11-04T23:59:53.410539798Z" level=info msg="RemoveContainer for \"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a\" returns successfully" Nov 4 23:59:53.410773 kubelet[4028]: I1104 23:59:53.410752 4028 scope.go:117] "RemoveContainer" containerID="3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6" Nov 4 23:59:53.413634 containerd[2578]: time="2025-11-04T23:59:53.413600407Z" level=info msg="RemoveContainer for \"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6\"" Nov 4 23:59:53.420259 containerd[2578]: time="2025-11-04T23:59:53.420221014Z" level=info msg="RemoveContainer for \"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6\" returns successfully" Nov 4 23:59:53.420424 kubelet[4028]: I1104 23:59:53.420380 4028 scope.go:117] "RemoveContainer" containerID="6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1" Nov 4 23:59:53.421747 containerd[2578]: time="2025-11-04T23:59:53.421681046Z" level=info msg="RemoveContainer for \"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1\"" Nov 4 23:59:53.427771 containerd[2578]: time="2025-11-04T23:59:53.427747633Z" level=info msg="RemoveContainer for \"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1\" returns successfully" Nov 4 23:59:53.427921 kubelet[4028]: I1104 23:59:53.427907 4028 scope.go:117] "RemoveContainer" containerID="ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3" Nov 4 23:59:53.429166 containerd[2578]: time="2025-11-04T23:59:53.429106811Z" level=info msg="RemoveContainer for \"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\"" Nov 4 23:59:53.439507 containerd[2578]: time="2025-11-04T23:59:53.439483726Z" level=info msg="RemoveContainer for \"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\" returns successfully" Nov 4 23:59:53.439655 kubelet[4028]: I1104 23:59:53.439622 4028 scope.go:117] "RemoveContainer" containerID="aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299" Nov 4 23:59:53.439841 containerd[2578]: time="2025-11-04T23:59:53.439798131Z" level=error msg="ContainerStatus for \"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\": not found" Nov 4 23:59:53.440005 kubelet[4028]: E1104 23:59:53.439988 4028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\": not found" containerID="aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299" Nov 4 23:59:53.440049 kubelet[4028]: I1104 23:59:53.440011 4028 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299"} err="failed to get container status \"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\": rpc error: code = NotFound desc = an error occurred when try to find container \"aadcce85be76e31c389acceaa0d094910dd3a94ecc874b3add98686e5f5d9299\": not found" Nov 4 23:59:53.440049 kubelet[4028]: I1104 23:59:53.440044 4028 scope.go:117] "RemoveContainer" containerID="ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a" Nov 4 23:59:53.440313 containerd[2578]: time="2025-11-04T23:59:53.440274533Z" level=error msg="ContainerStatus for \"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a\": not found" Nov 4 23:59:53.440466 kubelet[4028]: E1104 23:59:53.440437 4028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a\": not found" containerID="ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a" Nov 4 23:59:53.440502 kubelet[4028]: I1104 23:59:53.440469 4028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a"} err="failed to get container status \"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff8a320a41a82492d4d63485321ad2f6e6a4017b84f45e0d6c2ebf777db0c82a\": not found" Nov 4 23:59:53.440502 kubelet[4028]: I1104 23:59:53.440484 4028 scope.go:117] "RemoveContainer" containerID="3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6" Nov 4 23:59:53.440702 containerd[2578]: time="2025-11-04T23:59:53.440667017Z" level=error msg="ContainerStatus for \"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6\": not found" Nov 4 23:59:53.440838 kubelet[4028]: E1104 23:59:53.440766 4028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6\": not found" containerID="3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6" Nov 4 23:59:53.440838 kubelet[4028]: I1104 23:59:53.440780 4028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6"} err="failed to get container status \"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"3184571132256e5271e367a882c07e7c53f0b53435838b838fbfc148fd12a2a6\": not found" Nov 4 23:59:53.440838 kubelet[4028]: I1104 23:59:53.440792 4028 scope.go:117] "RemoveContainer" containerID="6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1" Nov 4 23:59:53.441076 containerd[2578]: time="2025-11-04T23:59:53.441038987Z" level=error msg="ContainerStatus for \"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1\": not found" Nov 4 23:59:53.441294 kubelet[4028]: E1104 23:59:53.441175 4028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1\": not found" containerID="6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1" Nov 4 23:59:53.441294 kubelet[4028]: I1104 23:59:53.441191 4028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1"} err="failed to get container status \"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b5d30cc1e8322a002d57e9c89d457b9f9e0a5f2935f80a6cbcda66df50b47a1\": not found" Nov 4 23:59:53.441294 kubelet[4028]: I1104 23:59:53.441202 4028 scope.go:117] "RemoveContainer" containerID="ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3" Nov 4 23:59:53.441480 containerd[2578]: time="2025-11-04T23:59:53.441457057Z" level=error msg="ContainerStatus for \"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\": not found" Nov 4 23:59:53.441689 kubelet[4028]: E1104 23:59:53.441570 4028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\": not found" containerID="ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3" Nov 4 23:59:53.441689 kubelet[4028]: I1104 23:59:53.441587 4028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3"} err="failed to get container status \"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef6099d6910a4cdcf961ada1bf66184fab5961b9cfa689d199290f5fcdb7c2a3\": not found" Nov 4 23:59:53.441689 kubelet[4028]: I1104 23:59:53.441624 4028 scope.go:117] "RemoveContainer" containerID="965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0" Nov 4 23:59:53.443609 containerd[2578]: time="2025-11-04T23:59:53.443588440Z" level=info msg="RemoveContainer for \"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\"" Nov 4 23:59:53.449726 containerd[2578]: time="2025-11-04T23:59:53.449695275Z" level=info msg="RemoveContainer for \"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\" returns successfully" Nov 4 23:59:53.449893 kubelet[4028]: I1104 23:59:53.449861 4028 scope.go:117] "RemoveContainer" containerID="965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0" Nov 4 23:59:53.450063 containerd[2578]: time="2025-11-04T23:59:53.450038982Z" level=error msg="ContainerStatus for \"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\": not found" Nov 4 23:59:53.450253 kubelet[4028]: E1104 
23:59:53.450237 4028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\": not found" containerID="965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0" Nov 4 23:59:53.450309 kubelet[4028]: I1104 23:59:53.450257 4028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0"} err="failed to get container status \"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\": rpc error: code = NotFound desc = an error occurred when try to find container \"965353ed5eeda6e21ce3d7b3853c3bfe6b8428addc61ed8299a9a66078751ce0\": not found" Nov 4 23:59:53.651367 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f3796ea796a374095feb5c87d1f463e1b1cf89d66df39b187d2f1fbf19f071ec-shm.mount: Deactivated successfully. Nov 4 23:59:53.651460 systemd[1]: var-lib-kubelet-pods-50a433a6\x2daa81\x2d458f\x2d9e5f\x2d1b9c98d0c7c7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 4 23:59:53.651521 systemd[1]: var-lib-kubelet-pods-50a433a6\x2daa81\x2d458f\x2d9e5f\x2d1b9c98d0c7c7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 4 23:59:53.651578 systemd[1]: var-lib-kubelet-pods-f56fbbfb\x2d10dc\x2d44b1\x2db5d0\x2dbedfda2eaa16-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwwswv.mount: Deactivated successfully. Nov 4 23:59:53.651632 systemd[1]: var-lib-kubelet-pods-50a433a6\x2daa81\x2d458f\x2d9e5f\x2d1b9c98d0c7c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds86sn.mount: Deactivated successfully. Nov 4 23:59:53.656162 update_engine[2548]: I20251104 23:59:53.656123 2548 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 4 23:59:53.656403 update_engine[2548]: I20251104 23:59:53.656190 2548 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 4 23:59:53.656558 update_engine[2548]: I20251104 23:59:53.656538 2548 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 4 23:59:53.687260 update_engine[2548]: E20251104 23:59:53.687227 2548 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 4 23:59:53.687343 update_engine[2548]: I20251104 23:59:53.687288 2548 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 4 23:59:53.687343 update_engine[2548]: I20251104 23:59:53.687299 2548 omaha_request_action.cc:617] Omaha request response: Nov 4 23:59:53.687392 update_engine[2548]: E20251104 23:59:53.687371 2548 omaha_request_action.cc:636] Omaha request network transfer failed. Nov 4 23:59:53.687415 update_engine[2548]: I20251104 23:59:53.687388 2548 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Nov 4 23:59:53.687415 update_engine[2548]: I20251104 23:59:53.687393 2548 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 4 23:59:53.687415 update_engine[2548]: I20251104 23:59:53.687398 2548 update_attempter.cc:306] Processing Done. Nov 4 23:59:53.687478 update_engine[2548]: E20251104 23:59:53.687414 2548 update_attempter.cc:619] Update failed. 
Nov 4 23:59:53.687478 update_engine[2548]: I20251104 23:59:53.687418 2548 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 4 23:59:53.687478 update_engine[2548]: I20251104 23:59:53.687423 2548 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 4 23:59:53.687478 update_engine[2548]: I20251104 23:59:53.687428 2548 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Nov 4 23:59:53.687564 update_engine[2548]: I20251104 23:59:53.687501 2548 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 4 23:59:53.687564 update_engine[2548]: I20251104 23:59:53.687526 2548 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 4 23:59:53.687564 update_engine[2548]: I20251104 23:59:53.687530 2548 omaha_request_action.cc:272] Request: Nov 4 23:59:53.687564 update_engine[2548]: Nov 4 23:59:53.687564 update_engine[2548]: Nov 4 23:59:53.687564 update_engine[2548]: Nov 4 23:59:53.687564 update_engine[2548]: Nov 4 23:59:53.687564 update_engine[2548]: Nov 4 23:59:53.687564 update_engine[2548]: Nov 4 23:59:53.687564 update_engine[2548]: I20251104 23:59:53.687536 2548 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 4 23:59:53.687564 update_engine[2548]: I20251104 23:59:53.687552 2548 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 4 23:59:53.687841 update_engine[2548]: I20251104 23:59:53.687789 2548 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 4 23:59:53.688076 locksmithd[2667]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 4 23:59:53.751995 update_engine[2548]: E20251104 23:59:53.751700 2548 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 4 23:59:53.751995 update_engine[2548]: I20251104 23:59:53.751761 2548 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 4 23:59:53.751995 update_engine[2548]: I20251104 23:59:53.751768 2548 omaha_request_action.cc:617] Omaha request response: Nov 4 23:59:53.751995 update_engine[2548]: I20251104 23:59:53.751776 2548 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 4 23:59:53.751995 update_engine[2548]: I20251104 23:59:53.751781 2548 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 4 23:59:53.751995 update_engine[2548]: I20251104 23:59:53.751786 2548 update_attempter.cc:306] Processing Done. Nov 4 23:59:53.751995 update_engine[2548]: I20251104 23:59:53.751794 2548 update_attempter.cc:310] Error event sent. 
Nov 4 23:59:53.751995 update_engine[2548]: I20251104 23:59:53.751803 2548 update_check_scheduler.cc:74] Next update check in 40m17s Nov 4 23:59:53.753069 locksmithd[2667]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 4 23:59:53.953416 kubelet[4028]: I1104 23:59:53.953383 4028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50a433a6-aa81-458f-9e5f-1b9c98d0c7c7" path="/var/lib/kubelet/pods/50a433a6-aa81-458f-9e5f-1b9c98d0c7c7/volumes" Nov 4 23:59:53.953887 kubelet[4028]: I1104 23:59:53.953865 4028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16" path="/var/lib/kubelet/pods/f56fbbfb-10dc-44b1-b5d0-bedfda2eaa16/volumes" Nov 4 23:59:54.546171 sshd[5707]: Connection closed by 10.200.16.10 port 48704 Nov 4 23:59:54.546754 sshd-session[5704]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:54.550939 systemd-logind[2547]: Session 23 logged out. Waiting for processes to exit. Nov 4 23:59:54.551450 systemd[1]: sshd@20-10.200.8.17:22-10.200.16.10:48704.service: Deactivated successfully. Nov 4 23:59:54.553310 systemd[1]: session-23.scope: Deactivated successfully. Nov 4 23:59:54.554930 systemd-logind[2547]: Removed session 23. Nov 4 23:59:54.659867 systemd[1]: Started sshd@21-10.200.8.17:22-10.200.16.10:48706.service - OpenSSH per-connection server daemon (10.200.16.10:48706). Nov 4 23:59:55.295062 sshd[5856]: Accepted publickey for core from 10.200.16.10 port 48706 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:55.296230 sshd-session[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:55.300568 systemd-logind[2547]: New session 24 of user core. Nov 4 23:59:55.309220 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 4 23:59:56.178846 systemd[1]: Created slice kubepods-burstable-pod6b76a218_e31e_4b1d_aeea_e2686e558abb.slice - libcontainer container kubepods-burstable-pod6b76a218_e31e_4b1d_aeea_e2686e558abb.slice. Nov 4 23:59:56.267968 sshd[5859]: Connection closed by 10.200.16.10 port 48706 Nov 4 23:59:56.269268 sshd-session[5856]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:56.271806 systemd[1]: sshd@21-10.200.8.17:22-10.200.16.10:48706.service: Deactivated successfully. Nov 4 23:59:56.273336 systemd[1]: session-24.scope: Deactivated successfully. Nov 4 23:59:56.275027 systemd-logind[2547]: Session 24 logged out. Waiting for processes to exit. Nov 4 23:59:56.275767 systemd-logind[2547]: Removed session 24. 
Nov 4 23:59:56.333629 kubelet[4028]: I1104 23:59:56.333604 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b76a218-e31e-4b1d-aeea-e2686e558abb-hostproc\") pod \"cilium-npcnx\" (UID: \"6b76a218-e31e-4b1d-aeea-e2686e558abb\") " pod="kube-system/cilium-npcnx" Nov 4 23:59:56.333898 kubelet[4028]: I1104 23:59:56.333632 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b76a218-e31e-4b1d-aeea-e2686e558abb-hubble-tls\") pod \"cilium-npcnx\" (UID: \"6b76a218-e31e-4b1d-aeea-e2686e558abb\") " pod="kube-system/cilium-npcnx" Nov 4 23:59:56.333898 kubelet[4028]: I1104 23:59:56.333649 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b76a218-e31e-4b1d-aeea-e2686e558abb-cilium-run\") pod \"cilium-npcnx\" (UID: \"6b76a218-e31e-4b1d-aeea-e2686e558abb\") " pod="kube-system/cilium-npcnx" Nov 4 23:59:56.333898 kubelet[4028]: I1104 23:59:56.333664 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b76a218-e31e-4b1d-aeea-e2686e558abb-lib-modules\") pod \"cilium-npcnx\" (UID: \"6b76a218-e31e-4b1d-aeea-e2686e558abb\") " pod="kube-system/cilium-npcnx" Nov 4 23:59:56.333898 kubelet[4028]: I1104 23:59:56.333680 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b76a218-e31e-4b1d-aeea-e2686e558abb-cni-path\") pod \"cilium-npcnx\" (UID: \"6b76a218-e31e-4b1d-aeea-e2686e558abb\") " pod="kube-system/cilium-npcnx" Nov 4 23:59:56.333898 kubelet[4028]: I1104 23:59:56.333696 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b76a218-e31e-4b1d-aeea-e2686e558abb-etc-cni-netd\") pod \"cilium-npcnx\" (UID: \"6b76a218-e31e-4b1d-aeea-e2686e558abb\") " pod="kube-system/cilium-npcnx" Nov 4 23:59:56.333898 kubelet[4028]: I1104 23:59:56.333713 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b76a218-e31e-4b1d-aeea-e2686e558abb-clustermesh-secrets\") pod \"cilium-npcnx\" (UID: \"6b76a218-e31e-4b1d-aeea-e2686e558abb\") " pod="kube-system/cilium-npcnx" Nov 4 23:59:56.334027 kubelet[4028]: I1104 23:59:56.333729 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b76a218-e31e-4b1d-aeea-e2686e558abb-cilium-cgroup\") pod \"cilium-npcnx\" (UID: \"6b76a218-e31e-4b1d-aeea-e2686e558abb\") " pod="kube-system/cilium-npcnx" Nov 4 23:59:56.334027 kubelet[4028]: I1104 23:59:56.333745 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b76a218-e31e-4b1d-aeea-e2686e558abb-cilium-config-path\") pod \"cilium-npcnx\" (UID: \"6b76a218-e31e-4b1d-aeea-e2686e558abb\") " pod="kube-system/cilium-npcnx" Nov 4 23:59:56.334027 kubelet[4028]: I1104 23:59:56.333763 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/6b76a218-e31e-4b1d-aeea-e2686e558abb-cilium-ipsec-secrets\") pod \"cilium-npcnx\" (UID: \"6b76a218-e31e-4b1d-aeea-e2686e558abb\") " pod="kube-system/cilium-npcnx" Nov 4 23:59:56.334027 kubelet[4028]: I1104 23:59:56.333780 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b76a218-e31e-4b1d-aeea-e2686e558abb-host-proc-sys-net\") pod \"cilium-npcnx\" (UID: \"6b76a218-e31e-4b1d-aeea-e2686e558abb\") " pod="kube-system/cilium-npcnx" Nov 4 23:59:56.334027 kubelet[4028]: I1104 23:59:56.333796 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmvs4\" (UniqueName: \"kubernetes.io/projected/6b76a218-e31e-4b1d-aeea-e2686e558abb-kube-api-access-rmvs4\") pod \"cilium-npcnx\" (UID: \"6b76a218-e31e-4b1d-aeea-e2686e558abb\") " pod="kube-system/cilium-npcnx" Nov 4 23:59:56.334132 kubelet[4028]: I1104 23:59:56.333817 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b76a218-e31e-4b1d-aeea-e2686e558abb-bpf-maps\") pod \"cilium-npcnx\" (UID: \"6b76a218-e31e-4b1d-aeea-e2686e558abb\") " pod="kube-system/cilium-npcnx" Nov 4 23:59:56.334132 kubelet[4028]: I1104 23:59:56.333833 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b76a218-e31e-4b1d-aeea-e2686e558abb-xtables-lock\") pod \"cilium-npcnx\" (UID: \"6b76a218-e31e-4b1d-aeea-e2686e558abb\") " pod="kube-system/cilium-npcnx" Nov 4 23:59:56.334132 kubelet[4028]: I1104 23:59:56.333856 4028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b76a218-e31e-4b1d-aeea-e2686e558abb-host-proc-sys-kernel\") pod \"cilium-npcnx\" (UID: \"6b76a218-e31e-4b1d-aeea-e2686e558abb\") " pod="kube-system/cilium-npcnx" Nov 4 23:59:56.381160 systemd[1]: Started sshd@22-10.200.8.17:22-10.200.16.10:48708.service - OpenSSH per-connection server daemon (10.200.16.10:48708). Nov 4 23:59:56.492255 containerd[2578]: time="2025-11-04T23:59:56.492173885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-npcnx,Uid:6b76a218-e31e-4b1d-aeea-e2686e558abb,Namespace:kube-system,Attempt:0,}" Nov 4 23:59:56.524263 containerd[2578]: time="2025-11-04T23:59:56.523565851Z" level=info msg="connecting to shim b8d2a3d550a56e0b08ea4b90b68ab51d811f5e77cb68abebf42c9d0b30d185f2" address="unix:///run/containerd/s/c1298c991f181967be5c0c8ea1c2a75b4876c5099603ea9deb988880da3f32da" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:59:56.546245 systemd[1]: Started cri-containerd-b8d2a3d550a56e0b08ea4b90b68ab51d811f5e77cb68abebf42c9d0b30d185f2.scope - libcontainer container b8d2a3d550a56e0b08ea4b90b68ab51d811f5e77cb68abebf42c9d0b30d185f2. 
Nov 4 23:59:56.566944 containerd[2578]: time="2025-11-04T23:59:56.566912560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-npcnx,Uid:6b76a218-e31e-4b1d-aeea-e2686e558abb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8d2a3d550a56e0b08ea4b90b68ab51d811f5e77cb68abebf42c9d0b30d185f2\"" Nov 4 23:59:56.574694 containerd[2578]: time="2025-11-04T23:59:56.574663152Z" level=info msg="CreateContainer within sandbox \"b8d2a3d550a56e0b08ea4b90b68ab51d811f5e77cb68abebf42c9d0b30d185f2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 4 23:59:56.588121 containerd[2578]: time="2025-11-04T23:59:56.587601200Z" level=info msg="Container e2271971b880ac6748765c342dc7e7e29f7fca5a128345212b7a05ca6a585219: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:59:56.599271 containerd[2578]: time="2025-11-04T23:59:56.599243555Z" level=info msg="CreateContainer within sandbox \"b8d2a3d550a56e0b08ea4b90b68ab51d811f5e77cb68abebf42c9d0b30d185f2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e2271971b880ac6748765c342dc7e7e29f7fca5a128345212b7a05ca6a585219\"" Nov 4 23:59:56.600106 containerd[2578]: time="2025-11-04T23:59:56.599642493Z" level=info msg="StartContainer for \"e2271971b880ac6748765c342dc7e7e29f7fca5a128345212b7a05ca6a585219\"" Nov 4 23:59:56.601503 containerd[2578]: time="2025-11-04T23:59:56.601463271Z" level=info msg="connecting to shim e2271971b880ac6748765c342dc7e7e29f7fca5a128345212b7a05ca6a585219" address="unix:///run/containerd/s/c1298c991f181967be5c0c8ea1c2a75b4876c5099603ea9deb988880da3f32da" protocol=ttrpc version=3 Nov 4 23:59:56.620236 systemd[1]: Started cri-containerd-e2271971b880ac6748765c342dc7e7e29f7fca5a128345212b7a05ca6a585219.scope - libcontainer container e2271971b880ac6748765c342dc7e7e29f7fca5a128345212b7a05ca6a585219. Nov 4 23:59:56.645820 systemd[1]: cri-containerd-e2271971b880ac6748765c342dc7e7e29f7fca5a128345212b7a05ca6a585219.scope: Deactivated successfully. Nov 4 23:59:56.647667 containerd[2578]: time="2025-11-04T23:59:56.647634053Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2271971b880ac6748765c342dc7e7e29f7fca5a128345212b7a05ca6a585219\" id:\"e2271971b880ac6748765c342dc7e7e29f7fca5a128345212b7a05ca6a585219\" pid:5934 exited_at:{seconds:1762300796 nanos:647332746}" Nov 4 23:59:56.650848 containerd[2578]: time="2025-11-04T23:59:56.650701445Z" level=info msg="received exit event container_id:\"e2271971b880ac6748765c342dc7e7e29f7fca5a128345212b7a05ca6a585219\" id:\"e2271971b880ac6748765c342dc7e7e29f7fca5a128345212b7a05ca6a585219\" pid:5934 exited_at:{seconds:1762300796 nanos:647332746}" Nov 4 23:59:56.651760 containerd[2578]: time="2025-11-04T23:59:56.651739412Z" level=info msg="StartContainer for \"e2271971b880ac6748765c342dc7e7e29f7fca5a128345212b7a05ca6a585219\" returns successfully" Nov 4 23:59:57.014027 sshd[5870]: Accepted publickey for core from 10.200.16.10 port 48708 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:57.014453 sshd-session[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:57.018748 systemd-logind[2547]: New session 25 of user core. Nov 4 23:59:57.022229 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 4 23:59:57.403174 containerd[2578]: time="2025-11-04T23:59:57.401677507Z" level=info msg="CreateContainer within sandbox \"b8d2a3d550a56e0b08ea4b90b68ab51d811f5e77cb68abebf42c9d0b30d185f2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 4 23:59:57.418478 containerd[2578]: time="2025-11-04T23:59:57.418450722Z" level=info msg="Container d21c684d91439d5886b5181a0f87cc349c9722ac1c8e55731cf81807028f89f2: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:59:57.429885 containerd[2578]: time="2025-11-04T23:59:57.429852080Z" level=info msg="CreateContainer within sandbox \"b8d2a3d550a56e0b08ea4b90b68ab51d811f5e77cb68abebf42c9d0b30d185f2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d21c684d91439d5886b5181a0f87cc349c9722ac1c8e55731cf81807028f89f2\"" Nov 4 23:59:57.431159 containerd[2578]: time="2025-11-04T23:59:57.430258089Z" level=info msg="StartContainer for \"d21c684d91439d5886b5181a0f87cc349c9722ac1c8e55731cf81807028f89f2\"" Nov 4 23:59:57.431159 containerd[2578]: time="2025-11-04T23:59:57.430966822Z" level=info msg="connecting to shim d21c684d91439d5886b5181a0f87cc349c9722ac1c8e55731cf81807028f89f2" address="unix:///run/containerd/s/c1298c991f181967be5c0c8ea1c2a75b4876c5099603ea9deb988880da3f32da" protocol=ttrpc version=3 Nov 4 23:59:57.455254 systemd[1]: Started cri-containerd-d21c684d91439d5886b5181a0f87cc349c9722ac1c8e55731cf81807028f89f2.scope - libcontainer container d21c684d91439d5886b5181a0f87cc349c9722ac1c8e55731cf81807028f89f2. Nov 4 23:59:57.462885 sshd[5966]: Connection closed by 10.200.16.10 port 48708 Nov 4 23:59:57.463957 sshd-session[5870]: pam_unix(sshd:session): session closed for user core Nov 4 23:59:57.468671 systemd[1]: sshd@22-10.200.8.17:22-10.200.16.10:48708.service: Deactivated successfully. Nov 4 23:59:57.470964 systemd[1]: session-25.scope: Deactivated successfully. Nov 4 23:59:57.473164 systemd-logind[2547]: Session 25 logged out. Waiting for processes to exit. Nov 4 23:59:57.475831 systemd-logind[2547]: Removed session 25. Nov 4 23:59:57.493259 containerd[2578]: time="2025-11-04T23:59:57.493222065Z" level=info msg="StartContainer for \"d21c684d91439d5886b5181a0f87cc349c9722ac1c8e55731cf81807028f89f2\" returns successfully" Nov 4 23:59:57.493535 systemd[1]: cri-containerd-d21c684d91439d5886b5181a0f87cc349c9722ac1c8e55731cf81807028f89f2.scope: Deactivated successfully. Nov 4 23:59:57.494686 containerd[2578]: time="2025-11-04T23:59:57.494224773Z" level=info msg="received exit event container_id:\"d21c684d91439d5886b5181a0f87cc349c9722ac1c8e55731cf81807028f89f2\" id:\"d21c684d91439d5886b5181a0f87cc349c9722ac1c8e55731cf81807028f89f2\" pid:5982 exited_at:{seconds:1762300797 nanos:493770938}" Nov 4 23:59:57.494762 containerd[2578]: time="2025-11-04T23:59:57.494694862Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d21c684d91439d5886b5181a0f87cc349c9722ac1c8e55731cf81807028f89f2\" id:\"d21c684d91439d5886b5181a0f87cc349c9722ac1c8e55731cf81807028f89f2\" pid:5982 exited_at:{seconds:1762300797 nanos:493770938}" Nov 4 23:59:57.509778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d21c684d91439d5886b5181a0f87cc349c9722ac1c8e55731cf81807028f89f2-rootfs.mount: Deactivated successfully. Nov 4 23:59:57.577851 systemd[1]: Started sshd@23-10.200.8.17:22-10.200.16.10:48724.service - OpenSSH per-connection server daemon (10.200.16.10:48724). 
Nov 4 23:59:58.045717 kubelet[4028]: E1104 23:59:58.045674 4028 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 4 23:59:58.211133 sshd[6017]: Accepted publickey for core from 10.200.16.10 port 48724 ssh2: RSA SHA256:aEwZ8kHztvqRhUi5/7Xl0auTw/Of7fcGuW0W24+buUk Nov 4 23:59:58.211797 sshd-session[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:59:58.215934 systemd-logind[2547]: New session 26 of user core. Nov 4 23:59:58.220231 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 4 23:59:58.403643 containerd[2578]: time="2025-11-04T23:59:58.403553699Z" level=info msg="CreateContainer within sandbox \"b8d2a3d550a56e0b08ea4b90b68ab51d811f5e77cb68abebf42c9d0b30d185f2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 4 23:59:58.427117 containerd[2578]: time="2025-11-04T23:59:58.424373870Z" level=info msg="Container 16b2feb5e2a0c41c777702c6ec06a8106339e0c6291a92f4d3fe7c1fe52d802a: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:59:58.442472 containerd[2578]: time="2025-11-04T23:59:58.442442183Z" level=info msg="CreateContainer within sandbox \"b8d2a3d550a56e0b08ea4b90b68ab51d811f5e77cb68abebf42c9d0b30d185f2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"16b2feb5e2a0c41c777702c6ec06a8106339e0c6291a92f4d3fe7c1fe52d802a\"" Nov 4 23:59:58.443006 containerd[2578]: time="2025-11-04T23:59:58.442974450Z" level=info msg="StartContainer for \"16b2feb5e2a0c41c777702c6ec06a8106339e0c6291a92f4d3fe7c1fe52d802a\"" Nov 4 23:59:58.444515 containerd[2578]: time="2025-11-04T23:59:58.444478183Z" level=info msg="connecting to shim 16b2feb5e2a0c41c777702c6ec06a8106339e0c6291a92f4d3fe7c1fe52d802a" address="unix:///run/containerd/s/c1298c991f181967be5c0c8ea1c2a75b4876c5099603ea9deb988880da3f32da" protocol=ttrpc version=3 Nov 4 23:59:58.463281 systemd[1]: Started cri-containerd-16b2feb5e2a0c41c777702c6ec06a8106339e0c6291a92f4d3fe7c1fe52d802a.scope - libcontainer container 16b2feb5e2a0c41c777702c6ec06a8106339e0c6291a92f4d3fe7c1fe52d802a. Nov 4 23:59:58.495795 systemd[1]: cri-containerd-16b2feb5e2a0c41c777702c6ec06a8106339e0c6291a92f4d3fe7c1fe52d802a.scope: Deactivated successfully. Nov 4 23:59:58.528580 containerd[2578]: time="2025-11-04T23:59:58.497525121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16b2feb5e2a0c41c777702c6ec06a8106339e0c6291a92f4d3fe7c1fe52d802a\" id:\"16b2feb5e2a0c41c777702c6ec06a8106339e0c6291a92f4d3fe7c1fe52d802a\" pid:6037 exited_at:{seconds:1762300798 nanos:497266546}" Nov 4 23:59:58.529789 containerd[2578]: time="2025-11-04T23:59:58.529754462Z" level=info msg="received exit event container_id:\"16b2feb5e2a0c41c777702c6ec06a8106339e0c6291a92f4d3fe7c1fe52d802a\" id:\"16b2feb5e2a0c41c777702c6ec06a8106339e0c6291a92f4d3fe7c1fe52d802a\" pid:6037 exited_at:{seconds:1762300798 nanos:497266546}" Nov 4 23:59:58.536635 containerd[2578]: time="2025-11-04T23:59:58.536592815Z" level=info msg="StartContainer for \"16b2feb5e2a0c41c777702c6ec06a8106339e0c6291a92f4d3fe7c1fe52d802a\" returns successfully" Nov 4 23:59:58.547429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16b2feb5e2a0c41c777702c6ec06a8106339e0c6291a92f4d3fe7c1fe52d802a-rootfs.mount: Deactivated successfully. 
Nov 4 23:59:59.409358 containerd[2578]: time="2025-11-04T23:59:59.409303438Z" level=info msg="CreateContainer within sandbox \"b8d2a3d550a56e0b08ea4b90b68ab51d811f5e77cb68abebf42c9d0b30d185f2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 4 23:59:59.425278 containerd[2578]: time="2025-11-04T23:59:59.424604409Z" level=info msg="Container f2228b4f5f5130947a9410a6089527583429c15bd9f27ffdccb945528ac87b05: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:59:59.437457 containerd[2578]: time="2025-11-04T23:59:59.437427728Z" level=info msg="CreateContainer within sandbox \"b8d2a3d550a56e0b08ea4b90b68ab51d811f5e77cb68abebf42c9d0b30d185f2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f2228b4f5f5130947a9410a6089527583429c15bd9f27ffdccb945528ac87b05\"" Nov 4 23:59:59.437931 containerd[2578]: time="2025-11-04T23:59:59.437897738Z" level=info msg="StartContainer for \"f2228b4f5f5130947a9410a6089527583429c15bd9f27ffdccb945528ac87b05\"" Nov 4 23:59:59.439072 containerd[2578]: time="2025-11-04T23:59:59.439028053Z" level=info msg="connecting to shim f2228b4f5f5130947a9410a6089527583429c15bd9f27ffdccb945528ac87b05" address="unix:///run/containerd/s/c1298c991f181967be5c0c8ea1c2a75b4876c5099603ea9deb988880da3f32da" protocol=ttrpc version=3 Nov 4 23:59:59.458236 systemd[1]: Started cri-containerd-f2228b4f5f5130947a9410a6089527583429c15bd9f27ffdccb945528ac87b05.scope - libcontainer container f2228b4f5f5130947a9410a6089527583429c15bd9f27ffdccb945528ac87b05. Nov 4 23:59:59.478873 systemd[1]: cri-containerd-f2228b4f5f5130947a9410a6089527583429c15bd9f27ffdccb945528ac87b05.scope: Deactivated successfully. Nov 4 23:59:59.481138 containerd[2578]: time="2025-11-04T23:59:59.480338825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2228b4f5f5130947a9410a6089527583429c15bd9f27ffdccb945528ac87b05\" id:\"f2228b4f5f5130947a9410a6089527583429c15bd9f27ffdccb945528ac87b05\" pid:6083 exited_at:{seconds:1762300799 nanos:479447631}" Nov 4 23:59:59.483930 containerd[2578]: time="2025-11-04T23:59:59.483537637Z" level=info msg="received exit event container_id:\"f2228b4f5f5130947a9410a6089527583429c15bd9f27ffdccb945528ac87b05\" id:\"f2228b4f5f5130947a9410a6089527583429c15bd9f27ffdccb945528ac87b05\" pid:6083 exited_at:{seconds:1762300799 nanos:479447631}" Nov 4 23:59:59.485949 containerd[2578]: time="2025-11-04T23:59:59.485913663Z" level=info msg="StartContainer for \"f2228b4f5f5130947a9410a6089527583429c15bd9f27ffdccb945528ac87b05\" returns successfully" Nov 4 23:59:59.503412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2228b4f5f5130947a9410a6089527583429c15bd9f27ffdccb945528ac87b05-rootfs.mount: Deactivated successfully. 
Nov 5 00:00:00.443830 containerd[2578]: time="2025-11-05T00:00:00.443780357Z" level=info msg="CreateContainer within sandbox \"b8d2a3d550a56e0b08ea4b90b68ab51d811f5e77cb68abebf42c9d0b30d185f2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 5 00:00:00.467018 containerd[2578]: time="2025-11-05T00:00:00.464318243Z" level=info msg="Container 35a694aa91551554a60e7616e085bdf648caa5af7bebfe1944574378ed55cdbe: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:00:00.479953 containerd[2578]: time="2025-11-05T00:00:00.479920117Z" level=info msg="CreateContainer within sandbox \"b8d2a3d550a56e0b08ea4b90b68ab51d811f5e77cb68abebf42c9d0b30d185f2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"35a694aa91551554a60e7616e085bdf648caa5af7bebfe1944574378ed55cdbe\"" Nov 5 00:00:00.480827 containerd[2578]: time="2025-11-05T00:00:00.480796129Z" level=info msg="StartContainer for \"35a694aa91551554a60e7616e085bdf648caa5af7bebfe1944574378ed55cdbe\"" Nov 5 00:00:00.482812 containerd[2578]: time="2025-11-05T00:00:00.482777331Z" level=info msg="connecting to shim 35a694aa91551554a60e7616e085bdf648caa5af7bebfe1944574378ed55cdbe" address="unix:///run/containerd/s/c1298c991f181967be5c0c8ea1c2a75b4876c5099603ea9deb988880da3f32da" protocol=ttrpc version=3 Nov 5 00:00:00.508245 systemd[1]: Started cri-containerd-35a694aa91551554a60e7616e085bdf648caa5af7bebfe1944574378ed55cdbe.scope - libcontainer container 35a694aa91551554a60e7616e085bdf648caa5af7bebfe1944574378ed55cdbe. Nov 5 00:00:00.541842 containerd[2578]: time="2025-11-05T00:00:00.541810834Z" level=info msg="StartContainer for \"35a694aa91551554a60e7616e085bdf648caa5af7bebfe1944574378ed55cdbe\" returns successfully" Nov 5 00:00:00.612107 containerd[2578]: time="2025-11-05T00:00:00.612059923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35a694aa91551554a60e7616e085bdf648caa5af7bebfe1944574378ed55cdbe\" id:\"026437b3ce9b03c3c350b7f4a58f415484153d57f40a52fe9620d091f2468654\" pid:6152 exited_at:{seconds:1762300800 nanos:610704085}" Nov 5 00:00:00.917120 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Nov 5 00:00:01.431913 kubelet[4028]: I1105 00:00:01.431828 4028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-npcnx" podStartSLOduration=5.4318115989999995 podStartE2EDuration="5.431811599s" podCreationTimestamp="2025-11-04 23:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:00:01.431522592 +0000 UTC m=+203.657662253" watchObservedRunningTime="2025-11-05 00:00:01.431811599 +0000 UTC m=+203.657951262" Nov 5 00:00:01.872537 kubelet[4028]: I1105 00:00:01.871508 4028 setters.go:543] "Node became not ready" node="ci-4487.0.0-n-fda2ba6bd5" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-05T00:00:01Z","lastTransitionTime":"2025-11-05T00:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 5 00:00:02.754325 containerd[2578]: time="2025-11-05T00:00:02.754279537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35a694aa91551554a60e7616e085bdf648caa5af7bebfe1944574378ed55cdbe\" id:\"164f65197c61b54decbae9b3260c90d8c0c41f5c912c94bf86ec65f79ab5c917\" pid:6340 exit_status:1 exited_at:{seconds:1762300802 nanos:753615510}" Nov 5 
00:00:03.569501 systemd-networkd[2207]: lxc_health: Link UP Nov 5 00:00:03.569689 systemd-networkd[2207]: lxc_health: Gained carrier Nov 5 00:00:04.883275 systemd-networkd[2207]: lxc_health: Gained IPv6LL Nov 5 00:00:04.890744 containerd[2578]: time="2025-11-05T00:00:04.890707849Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35a694aa91551554a60e7616e085bdf648caa5af7bebfe1944574378ed55cdbe\" id:\"6710058cc1f0658180d81254e24df0f609dadfe4defb84ad16d609e7e59dbf98\" pid:6705 exited_at:{seconds:1762300804 nanos:890224180}" Nov 5 00:00:07.048640 containerd[2578]: time="2025-11-05T00:00:07.048594319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35a694aa91551554a60e7616e085bdf648caa5af7bebfe1944574378ed55cdbe\" id:\"b055b2c2e9632e6f46ea7a66f6469bf83871bca9256ac7d57cda5e079dd17ab4\" pid:6744 exited_at:{seconds:1762300807 nanos:48367972}" Nov 5 00:00:09.144463 containerd[2578]: time="2025-11-05T00:00:09.144421227Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35a694aa91551554a60e7616e085bdf648caa5af7bebfe1944574378ed55cdbe\" id:\"ddc94b321dff48a25340524659065537671a1b0c46fe14892b2a315d77553e5c\" pid:6769 exited_at:{seconds:1762300809 nanos:143978510}" Nov 5 00:00:09.249197 sshd[6021]: Connection closed by 10.200.16.10 port 48724 Nov 5 00:00:09.250297 sshd-session[6017]: pam_unix(sshd:session): session closed for user core Nov 5 00:00:09.253988 systemd[1]: sshd@23-10.200.8.17:22-10.200.16.10:48724.service: Deactivated successfully. Nov 5 00:00:09.255904 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 00:00:09.256917 systemd-logind[2547]: Session 26 logged out. Waiting for processes to exit. Nov 5 00:00:09.258505 systemd-logind[2547]: Removed session 26.