Nov 6 00:22:19.039938 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:12:28 -00 2025 Nov 6 00:22:19.039966 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:22:19.039979 kernel: BIOS-provided physical RAM map: Nov 6 00:22:19.039986 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 6 00:22:19.039992 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Nov 6 00:22:19.039999 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Nov 6 00:22:19.040007 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Nov 6 00:22:19.040014 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Nov 6 00:22:19.040021 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Nov 6 00:22:19.040030 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Nov 6 00:22:19.040037 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Nov 6 00:22:19.040044 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Nov 6 00:22:19.040050 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Nov 6 00:22:19.040055 kernel: printk: legacy bootconsole [earlyser0] enabled Nov 6 00:22:19.040062 kernel: NX (Execute Disable) protection: active Nov 6 00:22:19.040070 kernel: APIC: Static calls initialized Nov 6 00:22:19.040076 kernel: efi: EFI v2.7 by Microsoft Nov 6 00:22:19.040083 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5518 RNG=0x3ffd2018 Nov 6 00:22:19.040089 kernel: random: crng init done Nov 6 00:22:19.040096 kernel: secureboot: Secure boot disabled Nov 6 00:22:19.040104 kernel: SMBIOS 3.1.0 present. 
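The BIOS-e820 entries above describe which physical ranges the firmware marked usable RAM versus reserved/ACPI regions. As a small illustrative sketch (not part of the boot flow, just a helper for reading lines in this format), the usable total can be summed like this; the sample entries are copied from the log above:

```python
import re

# Match "BIOS-e820: [mem 0xSTART-0xEND] TYPE" lines as printed above and
# sum the ranges whose type is exactly "usable".
E820_RE = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

def usable_bytes(lines):
    total = 0
    for line in lines:
        m = E820_RE.search(line)
        if m and m.group(3).strip() == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1  # e820 ranges are inclusive
    return total

sample = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable",
    "BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable",
]
print(usable_bytes(sample) // (1024 * 1024), "MiB usable in sample")
```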
Nov 6 00:22:19.040111 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Nov 6 00:22:19.040118 kernel: DMI: Memory slots populated: 2/2 Nov 6 00:22:19.040125 kernel: Hypervisor detected: Microsoft Hyper-V Nov 6 00:22:19.040132 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Nov 6 00:22:19.040139 kernel: Hyper-V: Nested features: 0x3e0101 Nov 6 00:22:19.040148 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Nov 6 00:22:19.040155 kernel: Hyper-V: Using hypercall for remote TLB flush Nov 6 00:22:19.040162 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 6 00:22:19.040169 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 6 00:22:19.040177 kernel: tsc: Detected 2299.999 MHz processor Nov 6 00:22:19.040184 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 00:22:19.040192 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 00:22:19.040199 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Nov 6 00:22:19.040207 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 6 00:22:19.040217 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 00:22:19.040225 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Nov 6 00:22:19.040232 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Nov 6 00:22:19.040239 kernel: Using GB pages for direct mapping Nov 6 00:22:19.040247 kernel: ACPI: Early table checksum verification disabled Nov 6 00:22:19.040258 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Nov 6 00:22:19.040266 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 00:22:19.040275 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 00:22:19.040282 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Nov 6 00:22:19.040289 kernel: ACPI: FACS 0x000000003FFFE000 000040 Nov 6 00:22:19.040296 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 00:22:19.040303 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 00:22:19.040310 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 00:22:19.040318 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Nov 6 00:22:19.040328 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Nov 6 00:22:19.040335 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 00:22:19.040343 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Nov 6 00:22:19.040351 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Nov 6 00:22:19.040359 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Nov 6 00:22:19.040366 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Nov 6 00:22:19.040374 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Nov 6 00:22:19.040382 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Nov 6 00:22:19.040389 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051] Nov 6 00:22:19.040399 
kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Nov 6 00:22:19.040406 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Nov 6 00:22:19.040414 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Nov 6 00:22:19.040422 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Nov 6 00:22:19.040430 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Nov 6 00:22:19.040437 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Nov 6 00:22:19.040445 kernel: Zone ranges: Nov 6 00:22:19.040452 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 00:22:19.040460 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 6 00:22:19.040469 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Nov 6 00:22:19.040477 kernel: Device empty Nov 6 00:22:19.040484 kernel: Movable zone start for each node Nov 6 00:22:19.040492 kernel: Early memory node ranges Nov 6 00:22:19.040499 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 6 00:22:19.040507 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Nov 6 00:22:19.040515 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Nov 6 00:22:19.040523 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Nov 6 00:22:19.040530 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Nov 6 00:22:19.040539 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Nov 6 00:22:19.040547 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 00:22:19.040555 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 6 00:22:19.040563 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 6 00:22:19.040570 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Nov 6 00:22:19.040578 kernel: ACPI: PM-Timer IO Port: 0x408 Nov 6 00:22:19.040586 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 6 00:22:19.040594 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 00:22:19.040601 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 00:22:19.040610 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Nov 6 00:22:19.040618 kernel: TSC deadline timer available Nov 6 00:22:19.040626 kernel: CPU topo: Max. logical packages: 1 Nov 6 00:22:19.040633 kernel: CPU topo: Max. logical dies: 1 Nov 6 00:22:19.040641 kernel: CPU topo: Max. dies per package: 1 Nov 6 00:22:19.040649 kernel: CPU topo: Max. threads per core: 2 Nov 6 00:22:19.040656 kernel: CPU topo: Num. cores per package: 1 Nov 6 00:22:19.040663 kernel: CPU topo: Num. 
threads per package: 2 Nov 6 00:22:19.040671 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 6 00:22:19.040681 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Nov 6 00:22:19.040688 kernel: Booting paravirtualized kernel on Hyper-V Nov 6 00:22:19.040696 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 00:22:19.040704 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 6 00:22:19.040712 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 6 00:22:19.040720 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 6 00:22:19.040728 kernel: pcpu-alloc: [0] 0 1 Nov 6 00:22:19.040736 kernel: Hyper-V: PV spinlocks enabled Nov 6 00:22:19.040744 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 6 00:22:19.040754 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:22:19.040763 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 6 00:22:19.040771 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 6 00:22:19.040778 kernel: Fallback order for Node 0: 0 Nov 6 00:22:19.040786 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Nov 6 00:22:19.040794 kernel: Policy zone: Normal Nov 6 00:22:19.040802 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 00:22:19.040810 kernel: software IO TLB: area num 2. Nov 6 00:22:19.040839 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 6 00:22:19.040879 kernel: ftrace: allocating 40021 entries in 157 pages Nov 6 00:22:19.040887 kernel: ftrace: allocated 157 pages with 5 groups Nov 6 00:22:19.040895 kernel: Dynamic Preempt: voluntary Nov 6 00:22:19.040903 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 00:22:19.040911 kernel: rcu: RCU event tracing is enabled. Nov 6 00:22:19.040919 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 6 00:22:19.040936 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 00:22:19.040945 kernel: Rude variant of Tasks RCU enabled. Nov 6 00:22:19.040953 kernel: Tracing variant of Tasks RCU enabled. Nov 6 00:22:19.040962 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 00:22:19.040970 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 6 00:22:19.040980 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:22:19.040988 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:22:19.040996 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:22:19.041004 kernel: Using NULL legacy PIC Nov 6 00:22:19.041013 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Nov 6 00:22:19.041022 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
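The "Kernel command line:" entry above repeats the bootloader parameters (root=LABEL=ROOT, the dm-verity usr hash, the Flatcar/azure OEM flags, and so on). A minimal sketch of splitting such a string into bare flags and key=value pairs; the string below is abbreviated from this log, and on a running system the full line is available from /proc/cmdline:

```python
def parse_cmdline(cmdline: str):
    """Split a kernel command line into (flags, key/value dict); last duplicate key wins."""
    flags, params = [], {}
    for token in cmdline.split():
        if "=" in token:
            key, _, value = token.partition("=")  # split at first '=', so root=LABEL=ROOT keeps "LABEL=ROOT"
            params[key] = value
        else:
            flags.append(token)
    return flags, params

# Abbreviated from the "Kernel command line:" entry above.
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr rootflags=rw "
           "mount.usrflags=ro root=LABEL=ROOT console=ttyS0,115200n8 "
           "flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin")
flags, params = parse_cmdline(cmdline)
print(params["root"], params["flatcar.oem.id"], flags)
```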
Nov 6 00:22:19.041030 kernel: Console: colour dummy device 80x25 Nov 6 00:22:19.041038 kernel: printk: legacy console [tty1] enabled Nov 6 00:22:19.041046 kernel: printk: legacy console [ttyS0] enabled Nov 6 00:22:19.041054 kernel: printk: legacy bootconsole [earlyser0] disabled Nov 6 00:22:19.041063 kernel: ACPI: Core revision 20240827 Nov 6 00:22:19.041071 kernel: Failed to register legacy timer interrupt Nov 6 00:22:19.041079 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 00:22:19.041088 kernel: x2apic enabled Nov 6 00:22:19.041098 kernel: APIC: Switched APIC routing to: physical x2apic Nov 6 00:22:19.041106 kernel: Hyper-V: Host Build 10.0.26100.1414-1-0 Nov 6 00:22:19.041115 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 6 00:22:19.041124 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Nov 6 00:22:19.041132 kernel: Hyper-V: Using IPI hypercalls Nov 6 00:22:19.041141 kernel: APIC: send_IPI() replaced with hv_send_ipi() Nov 6 00:22:19.041149 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Nov 6 00:22:19.041158 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Nov 6 00:22:19.041167 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Nov 6 00:22:19.041176 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Nov 6 00:22:19.041185 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Nov 6 00:22:19.041194 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Nov 6 00:22:19.041203 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999) Nov 6 00:22:19.041211 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 6 00:22:19.041220 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 6 00:22:19.041228 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 6 00:22:19.041236 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 00:22:19.041244 kernel: Spectre V2 : Mitigation: Retpolines Nov 6 00:22:19.041252 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 00:22:19.041262 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Nov 6 00:22:19.041271 kernel: RETBleed: Vulnerable Nov 6 00:22:19.041279 kernel: Speculative Store Bypass: Vulnerable Nov 6 00:22:19.041286 kernel: active return thunk: its_return_thunk Nov 6 00:22:19.041294 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 6 00:22:19.041303 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 00:22:19.041311 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 00:22:19.041319 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 00:22:19.041327 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 6 00:22:19.041335 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 6 00:22:19.041345 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 6 00:22:19.041354 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Nov 6 00:22:19.041362 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Nov 6 00:22:19.041371 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Nov 6 00:22:19.041379 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 00:22:19.041387 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Nov 6 00:22:19.041395 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Nov 6 00:22:19.041404 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Nov 6 00:22:19.041412 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Nov 6 00:22:19.041420 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Nov 6 00:22:19.041428 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Nov 6 00:22:19.041438 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Nov 6 00:22:19.041446 kernel: Freeing SMP alternatives memory: 32K Nov 6 00:22:19.041454 kernel: pid_max: default: 32768 minimum: 301 Nov 6 00:22:19.041462 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 6 00:22:19.041470 kernel: landlock: Up and running. Nov 6 00:22:19.041477 kernel: SELinux: Initializing. Nov 6 00:22:19.041485 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 00:22:19.041493 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 00:22:19.041501 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Nov 6 00:22:19.041509 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Nov 6 00:22:19.041517 kernel: signal: max sigframe size: 11952 Nov 6 00:22:19.041527 kernel: rcu: Hierarchical SRCU implementation. Nov 6 00:22:19.041537 kernel: rcu: Max phase no-delay instances is 400. Nov 6 00:22:19.041545 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 6 00:22:19.041554 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 6 00:22:19.041562 kernel: smp: Bringing up secondary CPUs ... Nov 6 00:22:19.041570 kernel: smpboot: x86: Booting SMP configuration: Nov 6 00:22:19.041579 kernel: .... 
node #0, CPUs: #1 Nov 6 00:22:19.041587 kernel: smp: Brought up 1 node, 2 CPUs Nov 6 00:22:19.041595 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Nov 6 00:22:19.041604 kernel: Memory: 8070876K/8383228K available (14336K kernel code, 2436K rwdata, 26048K rodata, 45548K init, 1180K bss, 306136K reserved, 0K cma-reserved) Nov 6 00:22:19.041614 kernel: devtmpfs: initialized Nov 6 00:22:19.041622 kernel: x86/mm: Memory block size: 128MB Nov 6 00:22:19.041631 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Nov 6 00:22:19.041639 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 00:22:19.041648 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 6 00:22:19.041655 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 00:22:19.041663 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 00:22:19.041671 kernel: audit: initializing netlink subsys (disabled) Nov 6 00:22:19.041681 kernel: audit: type=2000 audit(1762388535.029:1): state=initialized audit_enabled=0 res=1 Nov 6 00:22:19.041688 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 00:22:19.041697 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 00:22:19.041705 kernel: cpuidle: using governor menu Nov 6 00:22:19.041712 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 00:22:19.041720 kernel: dca service started, version 1.12.1 Nov 6 00:22:19.041728 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Nov 6 00:22:19.041737 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Nov 6 00:22:19.041745 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
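The Spectre V1/V2, RETBleed, Speculative Store Bypass, and ITS lines above are the kernel's mitigation report for this vCPU. The same status is exposed after boot under the standard sysfs directory /sys/devices/system/cpu/vulnerabilities/, which a short sketch can dump:

```python
from pathlib import Path

# Each file in this sysfs directory names a CPU issue and contains the
# kernel's one-line status, e.g. "Mitigation: Retpolines" or "Vulnerable".
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULN_DIR.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")
```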
Nov 6 00:22:19.041756 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 6 00:22:19.041764 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 6 00:22:19.041772 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 00:22:19.041780 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 00:22:19.041788 kernel: ACPI: Added _OSI(Module Device) Nov 6 00:22:19.041796 kernel: ACPI: Added _OSI(Processor Device) Nov 6 00:22:19.041804 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 00:22:19.041848 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 6 00:22:19.041857 kernel: ACPI: Interpreter enabled Nov 6 00:22:19.041868 kernel: ACPI: PM: (supports S0 S5) Nov 6 00:22:19.041876 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 00:22:19.041885 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 00:22:19.041894 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 6 00:22:19.041902 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Nov 6 00:22:19.041911 kernel: iommu: Default domain type: Translated Nov 6 00:22:19.041921 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 00:22:19.041930 kernel: efivars: Registered efivars operations Nov 6 00:22:19.041941 kernel: PCI: Using ACPI for IRQ routing Nov 6 00:22:19.041954 kernel: PCI: System does not support PCI Nov 6 00:22:19.041965 kernel: vgaarb: loaded Nov 6 00:22:19.041975 kernel: clocksource: Switched to clocksource tsc-early Nov 6 00:22:19.041984 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 00:22:19.041993 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 00:22:19.042000 kernel: pnp: PnP ACPI init Nov 6 00:22:19.042008 kernel: pnp: PnP ACPI: found 3 devices Nov 6 00:22:19.042015 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 00:22:19.042022 kernel: NET: Registered PF_INET protocol family Nov 6 00:22:19.042031 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 6 00:22:19.042039 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 6 00:22:19.042046 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 00:22:19.042054 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 6 00:22:19.042062 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 6 00:22:19.042069 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 6 00:22:19.042077 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 6 00:22:19.042084 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 6 00:22:19.042092 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 00:22:19.042102 kernel: NET: Registered PF_XDP protocol family Nov 6 00:22:19.042109 kernel: PCI: CLS 0 bytes, default 64 Nov 6 00:22:19.042118 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 6 00:22:19.042126 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB) Nov 6 00:22:19.042134 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Nov 6 00:22:19.042143 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Nov 6 00:22:19.042151 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Nov 6 
00:22:19.042160 kernel: clocksource: Switched to clocksource tsc Nov 6 00:22:19.042169 kernel: Initialise system trusted keyrings Nov 6 00:22:19.042178 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 6 00:22:19.042186 kernel: Key type asymmetric registered Nov 6 00:22:19.042194 kernel: Asymmetric key parser 'x509' registered Nov 6 00:22:19.042202 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 6 00:22:19.042210 kernel: io scheduler mq-deadline registered Nov 6 00:22:19.042218 kernel: io scheduler kyber registered Nov 6 00:22:19.042226 kernel: io scheduler bfq registered Nov 6 00:22:19.042233 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 00:22:19.042242 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 00:22:19.042251 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 00:22:19.042259 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 6 00:22:19.042267 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 00:22:19.042275 kernel: i8042: PNP: No PS/2 controller found. Nov 6 00:22:19.042417 kernel: rtc_cmos 00:02: registered as rtc0 Nov 6 00:22:19.042491 kernel: rtc_cmos 00:02: setting system clock to 2025-11-06T00:22:18 UTC (1762388538) Nov 6 00:22:19.042558 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Nov 6 00:22:19.042570 kernel: intel_pstate: Intel P-state driver initializing Nov 6 00:22:19.042578 kernel: efifb: probing for efifb Nov 6 00:22:19.042587 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 6 00:22:19.042595 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 6 00:22:19.042604 kernel: efifb: scrolling: redraw Nov 6 00:22:19.042612 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 6 00:22:19.042621 kernel: Console: switching to colour frame buffer device 128x48 Nov 6 00:22:19.042629 kernel: fb0: EFI VGA frame buffer device Nov 6 00:22:19.042637 kernel: pstore: Using crash dump compression: deflate Nov 6 00:22:19.042647 kernel: pstore: Registered efi_pstore as persistent store backend Nov 6 00:22:19.042656 kernel: NET: Registered PF_INET6 protocol family Nov 6 00:22:19.042664 kernel: Segment Routing with IPv6 Nov 6 00:22:19.042673 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 00:22:19.042681 kernel: NET: Registered PF_PACKET protocol family Nov 6 00:22:19.042690 kernel: Key type dns_resolver registered Nov 6 00:22:19.042697 kernel: IPI shorthand broadcast: enabled Nov 6 00:22:19.042706 kernel: sched_clock: Marking stable (3013005420, 116257108)->(3503495269, -374232741) Nov 6 00:22:19.042715 kernel: registered taskstats version 1 Nov 6 00:22:19.042724 kernel: Loading compiled-in X.509 certificates Nov 6 00:22:19.042734 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: f906521ec29cbf079ae365554bad8eb8ed6ecb31' Nov 6 00:22:19.042743 kernel: Demotion targets for Node 0: null Nov 6 00:22:19.042752 kernel: Key type .fscrypt registered Nov 6 00:22:19.042761 kernel: Key type fscrypt-provisioning registered Nov 6 00:22:19.042769 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 6 00:22:19.042777 kernel: ima: Allocated hash algorithm: sha1 Nov 6 00:22:19.042785 kernel: ima: No architecture policies found Nov 6 00:22:19.042793 kernel: clk: Disabling unused clocks Nov 6 00:22:19.042801 kernel: Warning: unable to open an initial console. 
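The rtc_cmos entry above sets the system clock to 2025-11-06T00:22:18 UTC and prints the corresponding epoch value 1762388538; the conversion can be checked directly:

```python
from datetime import datetime, timezone

# Epoch seconds printed by rtc_cmos in the log above.
print(datetime.fromtimestamp(1762388538, tz=timezone.utc).isoformat())
# -> 2025-11-06T00:22:18+00:00
```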
Nov 6 00:22:19.042827 kernel: Freeing unused kernel image (initmem) memory: 45548K Nov 6 00:22:19.042836 kernel: Write protecting the kernel read-only data: 40960k Nov 6 00:22:19.042844 kernel: Freeing unused kernel image (rodata/data gap) memory: 576K Nov 6 00:22:19.042852 kernel: Run /init as init process Nov 6 00:22:19.042860 kernel: with arguments: Nov 6 00:22:19.042868 kernel: /init Nov 6 00:22:19.042876 kernel: with environment: Nov 6 00:22:19.042885 kernel: HOME=/ Nov 6 00:22:19.042893 kernel: TERM=linux Nov 6 00:22:19.042905 systemd[1]: Successfully made /usr/ read-only. Nov 6 00:22:19.042918 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:22:19.042927 systemd[1]: Detected virtualization microsoft. Nov 6 00:22:19.042935 systemd[1]: Detected architecture x86-64. Nov 6 00:22:19.042943 systemd[1]: Running in initrd. Nov 6 00:22:19.042951 systemd[1]: No hostname configured, using default hostname. Nov 6 00:22:19.042960 systemd[1]: Hostname set to . Nov 6 00:22:19.042969 systemd[1]: Initializing machine ID from random generator. Nov 6 00:22:19.042977 systemd[1]: Queued start job for default target initrd.target. Nov 6 00:22:19.042985 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:22:19.042993 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:22:19.043003 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 6 00:22:19.043013 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:22:19.043021 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 6 00:22:19.043032 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 6 00:22:19.043042 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 6 00:22:19.043051 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 6 00:22:19.043060 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:22:19.043070 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:22:19.043079 systemd[1]: Reached target paths.target - Path Units. Nov 6 00:22:19.043088 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:22:19.043097 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:22:19.043108 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:22:19.043118 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:22:19.043127 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:22:19.043137 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 6 00:22:19.043146 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 6 00:22:19.043155 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 6 00:22:19.043164 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:22:19.043174 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:22:19.043184 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:22:19.043195 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 6 00:22:19.043204 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:22:19.043213 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 6 00:22:19.043222 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 6 00:22:19.043231 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 00:22:19.043239 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:22:19.043248 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:22:19.043257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:22:19.043276 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 00:22:19.043288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:22:19.043299 systemd[1]: Finished systemd-fsck-usr.service. Nov 6 00:22:19.043327 systemd-journald[186]: Collecting audit messages is disabled. Nov 6 00:22:19.043356 systemd-journald[186]: Journal started Nov 6 00:22:19.043385 systemd-journald[186]: Runtime Journal (/run/log/journal/48c07135aaa94e649d8b73204d94a894) is 8M, max 158.6M, 150.6M free. Nov 6 00:22:19.018717 systemd-modules-load[187]: Inserted module 'overlay' Nov 6 00:22:19.050669 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:22:19.052916 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 00:22:19.062855 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 6 00:22:19.063106 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:22:19.068832 kernel: Bridge firewalling registered Nov 6 00:22:19.068807 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:19.068844 systemd-modules-load[187]: Inserted module 'br_netfilter' Nov 6 00:22:19.076996 systemd-tmpfiles[200]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 6 00:22:19.082689 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:22:19.086270 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 00:22:19.088432 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:22:19.096894 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 00:22:19.113921 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:22:19.119934 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:22:19.130712 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:22:19.137030 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 6 00:22:19.137395 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:22:19.139920 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 6 00:22:19.141551 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:22:19.166298 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:22:19.183161 systemd-resolved[225]: Positive Trust Anchors: Nov 6 00:22:19.184892 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:22:19.187647 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:22:19.201906 systemd-resolved[225]: Defaulting to hostname 'linux'. Nov 6 00:22:19.204420 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:22:19.206900 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:22:19.251839 kernel: SCSI subsystem initialized Nov 6 00:22:19.259830 kernel: Loading iSCSI transport class v2.0-870. Nov 6 00:22:19.268833 kernel: iscsi: registered transport (tcp) Nov 6 00:22:19.287010 kernel: iscsi: registered transport (qla4xxx) Nov 6 00:22:19.287118 kernel: QLogic iSCSI HBA Driver Nov 6 00:22:19.300974 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:22:19.318269 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:22:19.322523 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:22:19.356006 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 6 00:22:19.358926 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 00:22:19.408831 kernel: raid6: avx512x4 gen() 44250 MB/s Nov 6 00:22:19.425830 kernel: raid6: avx512x2 gen() 43252 MB/s Nov 6 00:22:19.443827 kernel: raid6: avx512x1 gen() 25192 MB/s Nov 6 00:22:19.460824 kernel: raid6: avx2x4 gen() 35935 MB/s Nov 6 00:22:19.477825 kernel: raid6: avx2x2 gen() 38084 MB/s Nov 6 00:22:19.496474 kernel: raid6: avx2x1 gen() 30153 MB/s Nov 6 00:22:19.496565 kernel: raid6: using algorithm avx512x4 gen() 44250 MB/s Nov 6 00:22:19.515833 kernel: raid6: .... 
xor() 7438 MB/s, rmw enabled Nov 6 00:22:19.515852 kernel: raid6: using avx512x2 recovery algorithm Nov 6 00:22:19.533829 kernel: xor: automatically using best checksumming function avx Nov 6 00:22:19.656839 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 00:22:19.662163 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:22:19.666135 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:22:19.691913 systemd-udevd[435]: Using default interface naming scheme 'v255'. Nov 6 00:22:19.696939 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:22:19.697722 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 00:22:19.716175 dracut-pre-trigger[436]: rd.md=0: removing MD RAID activation Nov 6 00:22:19.734606 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:22:19.736922 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:22:19.767767 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:22:19.770064 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 6 00:22:19.816831 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 00:22:19.838049 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:22:19.838253 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:19.849689 kernel: hv_vmbus: Vmbus version:5.3 Nov 6 00:22:19.849723 kernel: AES CTR mode by8 optimization enabled Nov 6 00:22:19.849825 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:22:19.854401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:22:19.869828 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 6 00:22:19.878502 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 6 00:22:19.878533 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 6 00:22:19.881629 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:22:19.883796 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:19.890631 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 6 00:22:19.896805 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 6 00:22:19.908357 kernel: hv_vmbus: registering driver hv_netvsc Nov 6 00:22:19.908390 kernel: PTP clock support registered Nov 6 00:22:19.911842 kernel: hv_vmbus: registering driver hv_pci Nov 6 00:22:19.931551 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fab60a (unnamed net_device) (uninitialized): VF slot 1 added Nov 6 00:22:19.931738 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Nov 6 00:22:19.937089 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 6 00:22:19.939831 kernel: hv_vmbus: registering driver hv_storvsc Nov 6 00:22:19.947227 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Nov 6 00:22:19.947411 kernel: scsi host0: storvsc_host_t Nov 6 00:22:19.947546 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Nov 6 00:22:19.950698 kernel: hv_utils: Registering HyperV Utility Driver Nov 6 00:22:19.957057 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Nov 6 00:22:19.957255 kernel: hv_vmbus: registering driver hv_utils Nov 6 00:22:19.957272 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Nov 6 00:22:19.955950 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:20.061275 kernel: hv_vmbus: registering driver hid_hyperv Nov 6 00:22:20.061289 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Nov 6 00:22:20.061316 kernel: hv_utils: Shutdown IC version 3.2 Nov 6 00:22:20.061323 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Nov 6 00:22:20.061335 kernel: hv_utils: Heartbeat IC version 3.0 Nov 6 00:22:20.061342 kernel: hv_utils: TimeSync IC version 4.0 Nov 6 00:22:20.061348 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 6 00:22:20.061357 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 6 00:22:20.068582 systemd-resolved[225]: Clock change detected. Flushing caches. Nov 6 00:22:20.088408 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Nov 6 00:22:20.088553 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Nov 6 00:22:20.098849 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 6 00:22:20.099102 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 6 00:22:20.102967 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 6 00:22:20.113626 kernel: nvme nvme0: pci function c05b:00:00.0 Nov 6 00:22:20.113818 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Nov 6 00:22:20.124089 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#184 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 6 00:22:20.146002 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#150 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 6 00:22:20.267973 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 6 00:22:20.273970 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 6 00:22:20.556992 kernel: nvme nvme0: using unchecked data buffer Nov 6 00:22:20.808321 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Nov 6 00:22:20.830426 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Nov 6 00:22:20.836827 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. 
Nov 6 00:22:20.859166 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Nov 6 00:22:20.859480 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 00:22:20.875194 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 6 00:22:20.875859 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:22:20.876554 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:22:20.876583 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:22:20.877521 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 00:22:20.879065 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 00:22:20.909818 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:22:20.920709 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 6 00:22:21.042978 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Nov 6 00:22:21.052584 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Nov 6 00:22:21.052742 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Nov 6 00:22:21.055587 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Nov 6 00:22:21.080316 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Nov 6 00:22:21.080370 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Nov 6 00:22:21.080390 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Nov 6 00:22:21.080407 kernel: pci 7870:00:00.0: enabling Extended Tags Nov 6 00:22:21.098301 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Nov 6 00:22:21.098426 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Nov 6 00:22:21.098537 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Nov 6 00:22:21.101680 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Nov 6 00:22:21.111967 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Nov 6 00:22:21.115630 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fab60a eth0: VF registering: eth1 Nov 6 00:22:21.115806 kernel: mana 7870:00:00.0 eth1: joined to eth0 Nov 6 00:22:21.118970 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Nov 6 00:22:21.929974 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 6 00:22:21.930037 disk-uuid[663]: The operation has completed successfully. Nov 6 00:22:21.993858 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 00:22:21.993945 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 00:22:22.035781 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 6 00:22:22.044189 sh[700]: Success Nov 6 00:22:22.073418 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 6 00:22:22.073463 kernel: device-mapper: uevent: version 1.0.3 Nov 6 00:22:22.074341 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 6 00:22:22.082974 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Nov 6 00:22:22.330027 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Nov 6 00:22:22.336076 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 6 00:22:22.349492 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 6 00:22:22.365972 kernel: BTRFS: device fsid 85d805c5-984c-4a6a-aaeb-49fff3689175 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (713) Nov 6 00:22:22.369967 kernel: BTRFS info (device dm-0): first mount of filesystem 85d805c5-984c-4a6a-aaeb-49fff3689175 Nov 6 00:22:22.370007 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:22:22.754322 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 6 00:22:22.754420 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 00:22:22.755444 kernel: BTRFS info (device dm-0): enabling free space tree Nov 6 00:22:22.790555 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 6 00:22:22.794427 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:22:22.797608 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 00:22:22.801009 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 00:22:22.809084 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 6 00:22:22.829984 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (736) Nov 6 00:22:22.833261 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:22:22.833304 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:22:22.880270 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 6 00:22:22.880307 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 6 00:22:22.880382 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 6 00:22:22.887020 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:22:22.888286 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 6 00:22:22.893863 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 6 00:22:22.903045 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:22:22.908838 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:22:22.941048 systemd-networkd[882]: lo: Link UP Nov 6 00:22:22.941058 systemd-networkd[882]: lo: Gained carrier Nov 6 00:22:22.948229 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 6 00:22:22.942018 systemd-networkd[882]: Enumeration completed Nov 6 00:22:22.952506 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 6 00:22:22.952707 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fab60a eth0: Data path switched to VF: enP30832s1 Nov 6 00:22:22.942386 systemd-networkd[882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:22:22.942389 systemd-networkd[882]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:22:22.942666 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:22:22.945500 systemd[1]: Reached target network.target - Network. 
Nov 6 00:22:22.953379 systemd-networkd[882]: enP30832s1: Link UP Nov 6 00:22:22.953451 systemd-networkd[882]: eth0: Link UP Nov 6 00:22:22.953598 systemd-networkd[882]: eth0: Gained carrier Nov 6 00:22:22.953609 systemd-networkd[882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:22:22.960253 systemd-networkd[882]: enP30832s1: Gained carrier Nov 6 00:22:22.969983 systemd-networkd[882]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 6 00:22:23.923174 ignition[880]: Ignition 2.22.0 Nov 6 00:22:23.923186 ignition[880]: Stage: fetch-offline Nov 6 00:22:23.925104 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:22:23.923300 ignition[880]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:23.923307 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:22:23.923406 ignition[880]: parsed url from cmdline: "" Nov 6 00:22:23.923409 ignition[880]: no config URL provided Nov 6 00:22:23.923414 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:22:23.923419 ignition[880]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:22:23.923423 ignition[880]: failed to fetch config: resource requires networking Nov 6 00:22:23.923748 ignition[880]: Ignition finished successfully Nov 6 00:22:23.939788 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 6 00:22:23.978184 ignition[893]: Ignition 2.22.0 Nov 6 00:22:23.978194 ignition[893]: Stage: fetch Nov 6 00:22:23.978409 ignition[893]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:23.978416 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:22:23.978491 ignition[893]: parsed url from cmdline: "" Nov 6 00:22:23.978494 ignition[893]: no config URL provided Nov 6 00:22:23.978499 ignition[893]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:22:23.978505 ignition[893]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:22:23.978533 ignition[893]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 6 00:22:24.040300 ignition[893]: GET result: OK Nov 6 00:22:24.040385 ignition[893]: config has been read from IMDS userdata Nov 6 00:22:24.040410 ignition[893]: parsing config with SHA512: e38739d025c22e22824dc18f05f667d8163c1fa8176136db8d329adca0fc837290ea9ef17ff51edcb4207414757c77349a7cc99ef6afbab6c27e312bdc543001 Nov 6 00:22:24.044075 unknown[893]: fetched base config from "system" Nov 6 00:22:24.044085 unknown[893]: fetched base config from "system" Nov 6 00:22:24.044408 ignition[893]: fetch: fetch complete Nov 6 00:22:24.044092 unknown[893]: fetched user config from "azure" Nov 6 00:22:24.044412 ignition[893]: fetch: fetch passed Nov 6 00:22:24.046723 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 6 00:22:24.044450 ignition[893]: Ignition finished successfully Nov 6 00:22:24.059430 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 6 00:22:24.091376 ignition[900]: Ignition 2.22.0 Nov 6 00:22:24.091387 ignition[900]: Stage: kargs Nov 6 00:22:24.092791 ignition[900]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:24.094876 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
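In the fetch stage above, Ignition reads its config from the Azure instance-metadata userData endpoint shown in the GET line. A hedged sketch of an equivalent manual request follows; this is not Ignition's own code, it only works from inside the VM (169.254.169.254 is link-local), and the "Metadata: true" header plus base64 decoding reflect the usual Azure IMDS convention rather than anything stated in this log:

```python
import base64, hashlib, urllib.request

# Same endpoint Ignition logged above; the Metadata header and base64
# decoding are assumptions based on common IMDS usage, not shown in the log.
URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    payload = resp.read()

config = base64.b64decode(payload)          # userData is typically delivered base64-encoded
print(hashlib.sha512(config).hexdigest())   # compare with the SHA512 Ignition logs above
```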
Nov 6 00:22:24.092798 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:22:24.093375 ignition[900]: kargs: kargs passed Nov 6 00:22:24.093416 ignition[900]: Ignition finished successfully Nov 6 00:22:24.101516 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 00:22:24.126223 ignition[907]: Ignition 2.22.0 Nov 6 00:22:24.126232 ignition[907]: Stage: disks Nov 6 00:22:24.126458 ignition[907]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:24.129660 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 00:22:24.126467 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:22:24.133190 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 00:22:24.127564 ignition[907]: disks: disks passed Nov 6 00:22:24.134476 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 00:22:24.127602 ignition[907]: Ignition finished successfully Nov 6 00:22:24.134502 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:22:24.134524 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:22:24.134764 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:22:24.135495 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 6 00:22:24.217289 systemd-fsck[916]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Nov 6 00:22:24.220779 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 00:22:24.226276 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 00:22:24.629132 systemd-networkd[882]: eth0: Gained IPv6LL Nov 6 00:22:26.386972 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 25ee01aa-0270-4de7-b5da-d8936d968d16 r/w with ordered data mode. Quota mode: none. Nov 6 00:22:26.387935 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 00:22:26.391100 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 00:22:26.426773 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:22:26.445308 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 00:22:26.457082 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (925) Nov 6 00:22:26.457112 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:22:26.457123 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:22:26.454048 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 6 00:22:26.468211 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 6 00:22:26.468236 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 6 00:22:26.468248 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 6 00:22:26.459061 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 00:22:26.459089 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:22:26.465461 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:22:26.468536 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 00:22:26.476217 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 6 00:22:26.946556 coreos-metadata[927]: Nov 06 00:22:26.946 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 6 00:22:26.950854 coreos-metadata[927]: Nov 06 00:22:26.950 INFO Fetch successful Nov 6 00:22:26.952670 coreos-metadata[927]: Nov 06 00:22:26.952 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 6 00:22:26.962071 coreos-metadata[927]: Nov 06 00:22:26.962 INFO Fetch successful Nov 6 00:22:26.964037 coreos-metadata[927]: Nov 06 00:22:26.962 INFO wrote hostname ci-4459.1.0-n-288d7afe99 to /sysroot/etc/hostname Nov 6 00:22:26.964079 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 00:22:27.228668 initrd-setup-root[955]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 00:22:27.261324 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory Nov 6 00:22:27.294126 initrd-setup-root[969]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 00:22:27.313332 initrd-setup-root[976]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 00:22:28.290296 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 00:22:28.294647 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 00:22:28.303216 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 00:22:28.312019 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 6 00:22:28.314086 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:22:28.339511 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 00:22:28.341449 ignition[1044]: INFO : Ignition 2.22.0 Nov 6 00:22:28.341449 ignition[1044]: INFO : Stage: mount Nov 6 00:22:28.341449 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:28.341449 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:22:28.341449 ignition[1044]: INFO : mount: mount passed Nov 6 00:22:28.341449 ignition[1044]: INFO : Ignition finished successfully Nov 6 00:22:28.348269 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 00:22:28.357190 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 00:22:28.381061 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:22:28.399972 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1055) Nov 6 00:22:28.400006 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:22:28.402017 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:22:28.407545 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 6 00:22:28.407588 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 6 00:22:28.408656 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 6 00:22:28.410469 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 6 00:22:28.438896 ignition[1072]: INFO : Ignition 2.22.0 Nov 6 00:22:28.438896 ignition[1072]: INFO : Stage: files Nov 6 00:22:28.443996 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:28.443996 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:22:28.443996 ignition[1072]: DEBUG : files: compiled without relabeling support, skipping Nov 6 00:22:28.454367 ignition[1072]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 00:22:28.454367 ignition[1072]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 00:22:28.528115 ignition[1072]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 00:22:28.530290 ignition[1072]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 00:22:28.534026 ignition[1072]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 00:22:28.530540 unknown[1072]: wrote ssh authorized keys file for user: core Nov 6 00:22:28.614105 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:22:28.619035 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 00:22:28.658724 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 00:22:28.715346 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:22:28.718208 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 00:22:28.718208 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 6 00:22:29.017993 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 6 00:22:29.306352 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 00:22:29.306352 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 6 00:22:29.313658 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 00:22:29.313658 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:22:29.313658 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:22:29.313658 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:22:29.313658 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:22:29.313658 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:22:29.313658 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:22:29.334761 
ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:22:29.334761 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:22:29.334761 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:22:29.334761 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:22:29.334761 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:22:29.334761 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 6 00:22:29.628644 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 6 00:22:30.800342 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:22:30.800342 ignition[1072]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 6 00:22:30.847392 ignition[1072]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:22:30.854089 ignition[1072]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:22:30.854089 ignition[1072]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 6 00:22:30.854089 ignition[1072]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Nov 6 00:22:30.864065 ignition[1072]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 00:22:30.864065 ignition[1072]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:22:30.864065 ignition[1072]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:22:30.864065 ignition[1072]: INFO : files: files passed Nov 6 00:22:30.864065 ignition[1072]: INFO : Ignition finished successfully Nov 6 00:22:30.862666 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 00:22:30.874083 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 00:22:30.879153 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 6 00:22:30.890292 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 00:22:30.905030 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:22:30.905030 initrd-setup-root-after-ignition[1100]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:22:30.890378 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
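The createFiles operations above amount to fetching a URL and writing the payload to a path under the still-mounted /sysroot; the helm tarball, the cilium-cli tarball and the kubernetes sysext image are all handled that way before the matching systemd preset is enabled. A rough sketch of that step in Python (illustrative only; Ignition itself additionally handles verification, file modes and SELinux labels):

# Sketch only: download a remote file to a destination under /sysroot, as the
# Ignition files stage does for the artifacts listed in the log entries above.
import os
import urllib.request

def write_remote_file(url: str, dest: str) -> None:
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with urllib.request.urlopen(url, timeout=30) as resp, open(dest, "wb") as out:
        out.write(resp.read())

# URLs and destinations copied from the log entries above.
write_remote_file("https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz",
                  "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz")
write_remote_file("https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz",
                  "/sysroot/opt/bin/cilium.tar.gz")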
Nov 6 00:22:30.918454 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:22:30.899934 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:22:30.910508 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 00:22:30.921425 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 00:22:30.965184 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 00:22:30.965265 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 00:22:30.968930 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 00:22:30.974025 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 00:22:30.976788 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 00:22:30.977434 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 00:22:31.018604 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:22:31.021058 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 00:22:31.036394 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:22:31.041310 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:22:31.043341 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 00:22:31.045347 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 00:22:31.045494 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:22:31.048993 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 00:22:31.057175 systemd[1]: Stopped target basic.target - Basic System. Nov 6 00:22:31.057490 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 00:22:31.057810 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:22:31.058124 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 00:22:31.058710 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:22:31.059318 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 00:22:31.059908 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:22:31.060297 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 00:22:31.060660 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 00:22:31.060922 systemd[1]: Stopped target swap.target - Swaps. Nov 6 00:22:31.061559 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 00:22:31.061677 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:22:31.062455 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:22:31.063111 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:22:31.063749 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Nov 6 00:22:31.144563 ignition[1125]: INFO : Ignition 2.22.0 Nov 6 00:22:31.144563 ignition[1125]: INFO : Stage: umount Nov 6 00:22:31.144563 ignition[1125]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:31.144563 ignition[1125]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:22:31.144563 ignition[1125]: INFO : umount: umount passed Nov 6 00:22:31.144563 ignition[1125]: INFO : Ignition finished successfully Nov 6 00:22:31.065086 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:22:31.083434 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 00:22:31.083539 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 00:22:31.091303 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 00:22:31.091400 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:22:31.098337 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 00:22:31.098425 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 00:22:31.105791 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 6 00:22:31.105884 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 00:22:31.111322 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 00:22:31.118063 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 00:22:31.118233 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:22:31.135751 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 00:22:31.141135 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 00:22:31.141304 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:22:31.182115 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 00:22:31.182226 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:22:31.191516 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 00:22:31.191615 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 00:22:31.193473 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 00:22:31.193678 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 00:22:31.193821 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 00:22:31.193911 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 00:22:31.194581 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 6 00:22:31.194666 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 6 00:22:31.195017 systemd[1]: Stopped target network.target - Network. Nov 6 00:22:31.195660 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 00:22:31.195745 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:22:31.198272 systemd[1]: Stopped target paths.target - Path Units. Nov 6 00:22:31.198574 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 00:22:31.206006 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:22:31.214659 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 00:22:31.221748 systemd[1]: Stopped target sockets.target - Socket Units. 
Nov 6 00:22:31.232485 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 00:22:31.232516 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:22:31.237035 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 00:22:31.237070 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:22:31.239664 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 00:22:31.239709 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 00:22:31.242712 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 00:22:31.242746 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 00:22:31.246117 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 00:22:31.249051 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 00:22:31.253155 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 00:22:31.253695 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 00:22:31.253771 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 00:22:31.254526 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 00:22:31.254593 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 00:22:31.256782 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 00:22:31.256829 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 00:22:31.264284 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 00:22:31.329195 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fab60a eth0: Data path switched from VF: enP30832s1 Nov 6 00:22:31.333301 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 6 00:22:31.264375 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 00:22:31.268137 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 6 00:22:31.268495 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 6 00:22:31.271914 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 00:22:31.271947 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:22:31.278432 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 00:22:31.285012 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 00:22:31.285062 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:22:31.290630 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:22:31.298840 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 00:22:31.298934 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 00:22:31.305854 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 6 00:22:31.311636 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 00:22:31.311722 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:22:31.316640 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 00:22:31.316686 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 00:22:31.328608 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 00:22:31.328661 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 6 00:22:31.334331 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 00:22:31.334390 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:22:31.334762 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 00:22:31.334910 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:22:31.336340 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 00:22:31.336392 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 00:22:31.336609 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 00:22:31.336976 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:22:31.337220 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 00:22:31.337258 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:22:31.337494 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 00:22:31.337523 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 00:22:31.337780 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 00:22:31.337812 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:22:31.338747 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 00:22:31.338968 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 6 00:22:31.339005 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:22:31.341425 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 00:22:31.341476 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:22:31.351657 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:22:31.351709 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:31.374668 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 6 00:22:31.374706 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 6 00:22:31.374733 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:22:31.374924 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 00:22:31.375434 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 00:22:31.381681 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 00:22:31.381750 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 00:22:31.388490 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 00:22:31.395227 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 00:22:31.432765 systemd[1]: Switching root. Nov 6 00:22:31.535702 systemd-journald[186]: Journal stopped Nov 6 00:22:39.322750 systemd-journald[186]: Received SIGTERM from PID 1 (systemd). 
Nov 6 00:22:39.322777 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 00:22:39.322792 kernel: SELinux: policy capability open_perms=1 Nov 6 00:22:39.322801 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 00:22:39.322809 kernel: SELinux: policy capability always_check_network=0 Nov 6 00:22:39.322817 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 00:22:39.322827 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 00:22:39.322836 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 00:22:39.322846 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 00:22:39.322855 kernel: SELinux: policy capability userspace_initial_context=0 Nov 6 00:22:39.322864 kernel: audit: type=1403 audit(1762388553.131:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 00:22:39.322874 systemd[1]: Successfully loaded SELinux policy in 198.390ms. Nov 6 00:22:39.322884 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.994ms. Nov 6 00:22:39.322895 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:22:39.322908 systemd[1]: Detected virtualization microsoft. Nov 6 00:22:39.322918 systemd[1]: Detected architecture x86-64. Nov 6 00:22:39.322927 systemd[1]: Detected first boot. Nov 6 00:22:39.322937 systemd[1]: Hostname set to . Nov 6 00:22:39.322947 systemd[1]: Initializing machine ID from random generator. Nov 6 00:22:39.322970 zram_generator::config[1168]: No configuration found. Nov 6 00:22:39.322982 kernel: Guest personality initialized and is inactive Nov 6 00:22:39.322992 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Nov 6 00:22:39.323001 kernel: Initialized host personality Nov 6 00:22:39.323010 kernel: NET: Registered PF_VSOCK protocol family Nov 6 00:22:39.323019 systemd[1]: Populated /etc with preset unit settings. Nov 6 00:22:39.323030 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 00:22:39.323040 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 00:22:39.323051 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 00:22:39.323060 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 00:22:39.323070 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 00:22:39.323081 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 00:22:39.323090 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 00:22:39.323100 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 00:22:39.323110 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 00:22:39.323122 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 00:22:39.323131 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 00:22:39.323141 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 00:22:39.323151 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Nov 6 00:22:39.323161 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:22:39.323171 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 00:22:39.323184 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 00:22:39.323195 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 00:22:39.323205 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:22:39.323217 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 00:22:39.323226 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:22:39.323235 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:22:39.323244 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 00:22:39.323254 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 00:22:39.323264 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 00:22:39.323272 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 00:22:39.323282 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:22:39.323291 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:22:39.323300 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:22:39.323310 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:22:39.323319 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 00:22:39.323328 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 00:22:39.323338 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 00:22:39.323350 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:22:39.323359 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:22:39.323369 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:22:39.323378 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 00:22:39.323387 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 00:22:39.323397 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 00:22:39.323408 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 00:22:39.323418 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:39.323428 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 00:22:39.323438 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 00:22:39.323448 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 00:22:39.323459 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 00:22:39.323469 systemd[1]: Reached target machines.target - Containers. Nov 6 00:22:39.323479 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Nov 6 00:22:39.323492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:22:39.323503 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:22:39.323514 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 00:22:39.323524 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:22:39.323533 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:22:39.323544 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:22:39.323554 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 00:22:39.323566 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:22:39.323577 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 00:22:39.323588 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 00:22:39.323597 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 00:22:39.323607 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 00:22:39.323618 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 00:22:39.323646 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:22:39.323657 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:22:39.323667 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:22:39.323677 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:22:39.323689 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 00:22:39.323699 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 00:22:39.323710 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:22:39.323720 systemd[1]: verity-setup.service: Deactivated successfully. Nov 6 00:22:39.323730 systemd[1]: Stopped verity-setup.service. Nov 6 00:22:39.323741 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:39.323751 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 00:22:39.323761 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 00:22:39.323772 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 00:22:39.323782 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 00:22:39.323792 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 00:22:39.323804 kernel: loop: module loaded Nov 6 00:22:39.323814 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 00:22:39.323845 systemd-journald[1254]: Collecting audit messages is disabled. Nov 6 00:22:39.323870 kernel: fuse: init (API version 7.41) Nov 6 00:22:39.323880 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 00:22:39.323891 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Nov 6 00:22:39.323902 systemd-journald[1254]: Journal started Nov 6 00:22:39.323925 systemd-journald[1254]: Runtime Journal (/run/log/journal/46f828e49d4745959abc96dc5b523022) is 8M, max 158.6M, 150.6M free. Nov 6 00:22:38.769185 systemd[1]: Queued start job for default target multi-user.target. Nov 6 00:22:38.780507 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 6 00:22:38.780928 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 00:22:39.332967 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:22:39.335498 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 00:22:39.335647 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 00:22:39.337862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:22:39.338070 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:22:39.340170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:22:39.340307 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:22:39.341912 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 00:22:39.342085 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 00:22:39.345174 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:22:39.345310 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:22:39.347228 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:22:39.350308 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:22:39.354214 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 00:22:39.362085 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:22:39.365791 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 00:22:39.371029 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 00:22:39.375035 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 00:22:39.375066 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:22:39.382406 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 00:22:39.393056 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 00:22:39.409163 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:22:39.436229 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 00:22:39.447075 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 00:22:39.449899 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:22:39.450822 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 00:22:39.453949 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:22:39.455051 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Nov 6 00:22:39.467852 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 00:22:39.471624 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 00:22:39.475943 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 00:22:39.482220 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:22:39.485724 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 00:22:39.491980 kernel: ACPI: bus type drm_connector registered Nov 6 00:22:39.493143 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 00:22:39.496111 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:22:39.496412 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:22:39.509300 systemd-journald[1254]: Time spent on flushing to /var/log/journal/46f828e49d4745959abc96dc5b523022 is 12.637ms for 993 entries. Nov 6 00:22:39.509300 systemd-journald[1254]: System Journal (/var/log/journal/46f828e49d4745959abc96dc5b523022) is 8M, max 2.6G, 2.6G free. Nov 6 00:22:39.564380 systemd-journald[1254]: Received client request to flush runtime journal. Nov 6 00:22:39.564413 kernel: loop0: detected capacity change from 0 to 128016 Nov 6 00:22:39.544027 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 00:22:39.547344 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 00:22:39.557802 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 00:22:39.565594 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 00:22:39.631020 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:22:39.665323 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 00:22:39.782370 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 00:22:40.063973 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 00:22:40.098977 kernel: loop1: detected capacity change from 0 to 219144 Nov 6 00:22:40.159975 kernel: loop2: detected capacity change from 0 to 110984 Nov 6 00:22:40.422044 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 00:22:40.425114 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:22:40.575727 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Nov 6 00:22:40.575744 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Nov 6 00:22:40.578549 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:22:40.679971 kernel: loop3: detected capacity change from 0 to 27936 Nov 6 00:22:40.690860 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 6 00:22:40.695213 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:22:40.723937 systemd-udevd[1334]: Using default interface naming scheme 'v255'. 
Nov 6 00:22:41.203978 kernel: loop4: detected capacity change from 0 to 128016 Nov 6 00:22:41.215978 kernel: loop5: detected capacity change from 0 to 219144 Nov 6 00:22:41.229974 kernel: loop6: detected capacity change from 0 to 110984 Nov 6 00:22:41.241974 kernel: loop7: detected capacity change from 0 to 27936 Nov 6 00:22:41.251542 (sd-merge)[1336]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Nov 6 00:22:41.251977 (sd-merge)[1336]: Merged extensions into '/usr'. Nov 6 00:22:41.255934 systemd[1]: Reload requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 00:22:41.255947 systemd[1]: Reloading... Nov 6 00:22:41.305974 zram_generator::config[1358]: No configuration found. Nov 6 00:22:41.512706 systemd[1]: Reloading finished in 256 ms. Nov 6 00:22:41.532767 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 00:22:41.534933 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:22:41.549007 systemd[1]: Starting ensure-sysext.service... Nov 6 00:22:41.554075 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:22:41.558726 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:22:41.597616 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 6 00:22:41.597869 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 6 00:22:41.598257 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 00:22:41.599469 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 00:22:41.600668 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 00:22:41.601053 systemd-tmpfiles[1448]: ACLs are not supported, ignoring. Nov 6 00:22:41.601379 systemd-tmpfiles[1448]: ACLs are not supported, ignoring. Nov 6 00:22:41.608571 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 00:22:41.636084 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 00:22:41.641092 systemd[1]: Reload requested from client PID 1441 ('systemctl') (unit ensure-sysext.service)... Nov 6 00:22:41.641178 systemd[1]: Reloading... Nov 6 00:22:41.659811 systemd-tmpfiles[1448]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:22:41.659824 systemd-tmpfiles[1448]: Skipping /boot Nov 6 00:22:41.668777 systemd-tmpfiles[1448]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:22:41.668796 systemd-tmpfiles[1448]: Skipping /boot Nov 6 00:22:41.709973 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#187 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 6 00:22:41.755720 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 00:22:41.755775 zram_generator::config[1494]: No configuration found. 
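The sd-merge entries above pick up the kubernetes system extension through the link the files stage wrote earlier (/etc/extensions/kubernetes.raw pointing at /opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw) and merge it, together with the containerd-flatcar, docker-flatcar and oem-azure images, into /usr. A minimal sketch of creating such a link in Python (paths taken from the earlier Ignition entries, rooted at /sysroot because they were written from the initrd):

# Sketch only: recreate the extension symlink the Ignition files stage wrote;
# this is what lets systemd-sysext discover the image at the next merge.
import os

link = "/sysroot/etc/extensions/kubernetes.raw"
target = "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"

os.makedirs(os.path.dirname(link), exist_ok=True)
if not os.path.lexists(link):
    os.symlink(target, link)  # the target resolves inside the real root after the pivot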
Nov 6 00:22:41.801978 kernel: hv_vmbus: registering driver hv_balloon Nov 6 00:22:41.824971 kernel: hv_vmbus: registering driver hyperv_fb Nov 6 00:22:41.825023 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 6 00:22:41.836991 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 6 00:22:41.846982 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 6 00:22:41.847042 kernel: Console: switching to colour dummy device 80x25 Nov 6 00:22:41.848282 kernel: Console: switching to colour frame buffer device 128x48 Nov 6 00:22:42.136949 systemd[1]: Reloading finished in 495 ms. Nov 6 00:22:42.141589 systemd-networkd[1447]: lo: Link UP Nov 6 00:22:42.141597 systemd-networkd[1447]: lo: Gained carrier Nov 6 00:22:42.142812 systemd-networkd[1447]: Enumeration completed Nov 6 00:22:42.143948 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:22:42.143984 systemd-networkd[1447]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:22:42.146980 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 6 00:22:42.154969 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 6 00:22:42.155172 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fab60a eth0: Data path switched to VF: enP30832s1 Nov 6 00:22:42.153729 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 00:22:42.159243 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:22:42.162021 systemd-networkd[1447]: enP30832s1: Link UP Nov 6 00:22:42.162104 systemd-networkd[1447]: eth0: Link UP Nov 6 00:22:42.162107 systemd-networkd[1447]: eth0: Gained carrier Nov 6 00:22:42.162123 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:22:42.168198 systemd-networkd[1447]: enP30832s1: Gained carrier Nov 6 00:22:42.173019 systemd-networkd[1447]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 6 00:22:42.176019 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:22:42.248417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 6 00:22:42.259668 systemd[1]: Finished ensure-sysext.service. Nov 6 00:22:42.265459 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:42.268151 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:22:42.303400 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 00:22:42.306091 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:22:42.307263 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:22:42.313233 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:22:42.321122 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:22:42.327608 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:22:42.330035 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 6 00:22:42.332042 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 00:22:42.335031 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:22:42.337151 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 00:22:42.343252 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 00:22:42.351864 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 00:22:42.357345 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:22:42.360129 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 00:22:42.367238 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 00:22:42.378053 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:22:42.380999 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:42.384631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:22:42.388078 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:22:42.391619 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:22:42.391786 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:22:42.395321 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:22:42.395531 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:22:42.399289 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Nov 6 00:22:42.402186 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:22:42.402390 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:22:42.412609 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:22:42.412767 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:22:42.419348 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 00:22:42.451476 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 00:22:42.459150 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 00:22:42.496077 systemd-resolved[1604]: Positive Trust Anchors: Nov 6 00:22:42.496090 systemd-resolved[1604]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:22:42.496122 systemd-resolved[1604]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:22:42.527372 systemd-resolved[1604]: Using system hostname 'ci-4459.1.0-n-288d7afe99'. Nov 6 00:22:42.535267 augenrules[1636]: No rules Nov 6 00:22:42.536120 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:22:42.536336 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:22:42.555529 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:22:42.555666 systemd[1]: Reached target network.target - Network. Nov 6 00:22:42.555811 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:22:42.681086 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 00:22:43.743202 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:43.766118 systemd-networkd[1447]: eth0: Gained IPv6LL Nov 6 00:22:43.768305 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 00:22:43.773235 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 00:22:45.115723 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 00:22:45.120212 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 00:22:47.780074 ldconfig[1302]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 00:22:47.789731 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 00:22:47.794194 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 00:22:47.824673 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 00:22:47.826717 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:22:47.830179 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 00:22:47.834045 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 00:22:47.838012 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 6 00:22:47.839909 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 00:22:47.843048 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 00:22:47.844917 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 00:22:47.846725 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 00:22:47.846754 systemd[1]: Reached target paths.target - Path Units. 
Nov 6 00:22:47.848060 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:22:47.879445 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 00:22:47.883096 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 00:22:47.886720 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 00:22:47.889048 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 00:22:47.893021 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 00:22:47.901462 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 00:22:47.903206 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 00:22:47.906561 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 00:22:47.908823 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:22:47.910327 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:22:47.913067 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:22:47.913094 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:22:47.941841 systemd[1]: Starting chronyd.service - NTP client/server... Nov 6 00:22:47.945118 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 00:22:47.952177 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 6 00:22:47.956796 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 00:22:47.960169 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 00:22:47.966713 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 00:22:47.971680 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 00:22:47.974438 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 00:22:47.975327 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 6 00:22:47.978264 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Nov 6 00:22:47.981248 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 6 00:22:47.983727 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 6 00:22:47.987115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:47.996094 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 00:22:48.000557 jq[1658]: false Nov 6 00:22:48.005452 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 00:22:48.011903 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 00:22:48.017422 KVP[1664]: KVP starting; pid is:1664 Nov 6 00:22:48.018997 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 00:22:48.025711 KVP[1664]: KVP LIC Version: 3.1 Nov 6 00:22:48.026017 kernel: hv_utils: KVP IC version 4.0 Nov 6 00:22:48.026110 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Nov 6 00:22:48.034923 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 00:22:48.038050 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 00:22:48.039533 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 00:22:48.040669 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 00:22:48.045172 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 00:22:48.051842 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 00:22:48.056140 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 00:22:48.056341 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 00:22:48.058606 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 00:22:48.058797 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 00:22:48.082011 jq[1679]: true Nov 6 00:22:48.082293 extend-filesystems[1659]: Found /dev/nvme0n1p6 Nov 6 00:22:48.083268 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 00:22:48.083685 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 00:22:48.088503 chronyd[1653]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Nov 6 00:22:48.108052 (ntainerd)[1693]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 00:22:48.112996 google_oslogin_nss_cache[1663]: oslogin_cache_refresh[1663]: Refreshing passwd entry cache Nov 6 00:22:48.111816 oslogin_cache_refresh[1663]: Refreshing passwd entry cache Nov 6 00:22:48.118070 jq[1696]: true Nov 6 00:22:48.121251 extend-filesystems[1659]: Found /dev/nvme0n1p9 Nov 6 00:22:48.125718 extend-filesystems[1659]: Checking size of /dev/nvme0n1p9 Nov 6 00:22:48.138914 google_oslogin_nss_cache[1663]: oslogin_cache_refresh[1663]: Failure getting users, quitting Nov 6 00:22:48.138914 google_oslogin_nss_cache[1663]: oslogin_cache_refresh[1663]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:22:48.138914 google_oslogin_nss_cache[1663]: oslogin_cache_refresh[1663]: Refreshing group entry cache Nov 6 00:22:48.138073 oslogin_cache_refresh[1663]: Failure getting users, quitting Nov 6 00:22:48.138088 oslogin_cache_refresh[1663]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:22:48.138132 oslogin_cache_refresh[1663]: Refreshing group entry cache Nov 6 00:22:48.139270 chronyd[1653]: Timezone right/UTC failed leap second check, ignoring Nov 6 00:22:48.139480 systemd[1]: Started chronyd.service - NTP client/server. Nov 6 00:22:48.139402 chronyd[1653]: Loaded seccomp filter (level 2) Nov 6 00:22:48.154748 update_engine[1677]: I20251106 00:22:48.150709 1677 main.cc:92] Flatcar Update Engine starting Nov 6 00:22:48.154558 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Nov 6 00:22:48.151399 oslogin_cache_refresh[1663]: Failure getting groups, quitting Nov 6 00:22:48.155153 google_oslogin_nss_cache[1663]: oslogin_cache_refresh[1663]: Failure getting groups, quitting Nov 6 00:22:48.155153 google_oslogin_nss_cache[1663]: oslogin_cache_refresh[1663]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:22:48.151409 oslogin_cache_refresh[1663]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:22:48.155301 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 6 00:22:48.159399 tar[1685]: linux-amd64/LICENSE Nov 6 00:22:48.159399 tar[1685]: linux-amd64/helm Nov 6 00:22:48.178026 extend-filesystems[1659]: Old size kept for /dev/nvme0n1p9 Nov 6 00:22:48.178580 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 00:22:48.178787 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 6 00:22:48.191645 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 00:22:48.218808 systemd-logind[1676]: New seat seat0. Nov 6 00:22:48.225251 systemd-logind[1676]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 00:22:48.225408 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 00:22:48.331492 bash[1737]: Updated "/home/core/.ssh/authorized_keys" Nov 6 00:22:48.332562 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 00:22:48.343634 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 6 00:22:48.526223 dbus-daemon[1656]: [system] SELinux support is enabled Nov 6 00:22:48.526605 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 00:22:48.533400 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 00:22:48.533431 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 00:22:48.536879 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 00:22:48.536901 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 00:22:48.542709 dbus-daemon[1656]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 6 00:22:48.545834 systemd[1]: Started update-engine.service - Update Engine. Nov 6 00:22:48.550053 update_engine[1677]: I20251106 00:22:48.546181 1677 update_check_scheduler.cc:74] Next update check in 6m14s Nov 6 00:22:48.554981 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Nov 6 00:22:48.637362 coreos-metadata[1655]: Nov 06 00:22:48.637 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 6 00:22:48.642458 coreos-metadata[1655]: Nov 06 00:22:48.641 INFO Fetch successful Nov 6 00:22:48.642458 coreos-metadata[1655]: Nov 06 00:22:48.642 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 6 00:22:48.646552 coreos-metadata[1655]: Nov 06 00:22:48.646 INFO Fetch successful Nov 6 00:22:48.646552 coreos-metadata[1655]: Nov 06 00:22:48.646 INFO Fetching http://168.63.129.16/machine/010809e4-11f6-40c2-80d9-b2027ebb5ed5/d6f75ff7%2D259c%2D4292%2Da40e%2D3dc5d9d8a063.%5Fci%2D4459.1.0%2Dn%2D288d7afe99?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 6 00:22:48.648046 coreos-metadata[1655]: Nov 06 00:22:48.647 INFO Fetch successful Nov 6 00:22:48.651274 coreos-metadata[1655]: Nov 06 00:22:48.650 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 6 00:22:48.664976 coreos-metadata[1655]: Nov 06 00:22:48.662 INFO Fetch successful Nov 6 00:22:48.717732 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 6 00:22:48.720676 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 00:22:48.856628 sshd_keygen[1701]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 00:22:48.885338 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 00:22:48.892749 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 00:22:48.900410 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 6 00:22:48.904661 tar[1685]: linux-amd64/README.md Nov 6 00:22:48.914763 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 00:22:48.915606 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 00:22:48.921379 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 00:22:48.930163 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 00:22:48.935548 locksmithd[1765]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 00:22:48.944438 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 6 00:22:48.958452 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 00:22:48.964115 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 00:22:48.969097 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 00:22:48.971857 systemd[1]: Reached target getty.target - Login Prompts. 
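The metadata agent above first talks to the Azure wire server (168.63.129.16) and then reads the VM size from the Instance Metadata Service at 169.254.169.254. A minimal sketch of an equivalent query, not taken from coreos-metadata itself: it assumes the standard `Metadata: true` request header that IMDS requires and only works from inside an Azure VM.

```python
# Minimal sketch (illustrative, not the agent's code): fetch the same vmSize
# value from the Azure IMDS endpoint logged above. IMDS requires the
# "Metadata: true" header and is only reachable from within the VM.
import urllib.request

IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
            "?api-version=2017-08-01&format=text")

def fetch_vm_size(timeout: float = 5.0) -> str:
    req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print(fetch_vm_size())
```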
Nov 6 00:22:49.026116 containerd[1693]: time="2025-11-06T00:22:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 6 00:22:49.026910 containerd[1693]: time="2025-11-06T00:22:49.026885651Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 6 00:22:49.035752 containerd[1693]: time="2025-11-06T00:22:49.035727477Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.156µs" Nov 6 00:22:49.035810 containerd[1693]: time="2025-11-06T00:22:49.035802353Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 6 00:22:49.035850 containerd[1693]: time="2025-11-06T00:22:49.035844354Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 6 00:22:49.036002 containerd[1693]: time="2025-11-06T00:22:49.035991085Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 6 00:22:49.036260 containerd[1693]: time="2025-11-06T00:22:49.036252022Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 6 00:22:49.036301 containerd[1693]: time="2025-11-06T00:22:49.036293852Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:22:49.036382 containerd[1693]: time="2025-11-06T00:22:49.036373874Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:22:49.036410 containerd[1693]: time="2025-11-06T00:22:49.036405150Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:22:49.036584 containerd[1693]: time="2025-11-06T00:22:49.036575861Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:22:49.036608 containerd[1693]: time="2025-11-06T00:22:49.036603246Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:22:49.036635 containerd[1693]: time="2025-11-06T00:22:49.036627567Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:22:49.036667 containerd[1693]: time="2025-11-06T00:22:49.036660560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 6 00:22:49.036738 containerd[1693]: time="2025-11-06T00:22:49.036732725Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 6 00:22:49.036873 containerd[1693]: time="2025-11-06T00:22:49.036867193Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:22:49.036911 containerd[1693]: time="2025-11-06T00:22:49.036904773Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Nov 6 00:22:49.036933 containerd[1693]: time="2025-11-06T00:22:49.036928864Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 6 00:22:49.037004 containerd[1693]: time="2025-11-06T00:22:49.036995589Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 6 00:22:49.037175 containerd[1693]: time="2025-11-06T00:22:49.037167690Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 6 00:22:49.037228 containerd[1693]: time="2025-11-06T00:22:49.037223236Z" level=info msg="metadata content store policy set" policy=shared Nov 6 00:22:49.048376 containerd[1693]: time="2025-11-06T00:22:49.048326728Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 6 00:22:49.048527 containerd[1693]: time="2025-11-06T00:22:49.048449930Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 6 00:22:49.048527 containerd[1693]: time="2025-11-06T00:22:49.048472687Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 6 00:22:49.049006 containerd[1693]: time="2025-11-06T00:22:49.048609819Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 6 00:22:49.049006 containerd[1693]: time="2025-11-06T00:22:49.048631075Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 6 00:22:49.049006 containerd[1693]: time="2025-11-06T00:22:49.048643140Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 6 00:22:49.049006 containerd[1693]: time="2025-11-06T00:22:49.048657614Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 6 00:22:49.049006 containerd[1693]: time="2025-11-06T00:22:49.048669224Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 6 00:22:49.049006 containerd[1693]: time="2025-11-06T00:22:49.048680924Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 6 00:22:49.049006 containerd[1693]: time="2025-11-06T00:22:49.048690893Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 6 00:22:49.049006 containerd[1693]: time="2025-11-06T00:22:49.048700067Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 6 00:22:49.049006 containerd[1693]: time="2025-11-06T00:22:49.048712486Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 6 00:22:49.049006 containerd[1693]: time="2025-11-06T00:22:49.048809253Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 6 00:22:49.049006 containerd[1693]: time="2025-11-06T00:22:49.048826283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 6 00:22:49.049006 containerd[1693]: time="2025-11-06T00:22:49.048840154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 6 00:22:49.049006 containerd[1693]: time="2025-11-06T00:22:49.048850514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Nov 6 00:22:49.049006 containerd[1693]: time="2025-11-06T00:22:49.048860112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 6 00:22:49.049306 containerd[1693]: time="2025-11-06T00:22:49.048870498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 6 00:22:49.049306 containerd[1693]: time="2025-11-06T00:22:49.048882057Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 6 00:22:49.049306 containerd[1693]: time="2025-11-06T00:22:49.048892261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 6 00:22:49.049306 containerd[1693]: time="2025-11-06T00:22:49.048905396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 6 00:22:49.049306 containerd[1693]: time="2025-11-06T00:22:49.048915467Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 6 00:22:49.049306 containerd[1693]: time="2025-11-06T00:22:49.048925122Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 6 00:22:49.049498 containerd[1693]: time="2025-11-06T00:22:49.049461753Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 6 00:22:49.049498 containerd[1693]: time="2025-11-06T00:22:49.049477854Z" level=info msg="Start snapshots syncer" Nov 6 00:22:49.049613 containerd[1693]: time="2025-11-06T00:22:49.049561381Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 6 00:22:49.049963 containerd[1693]: time="2025-11-06T00:22:49.049865628Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 6 
00:22:49.049963 containerd[1693]: time="2025-11-06T00:22:49.049914946Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 6 00:22:49.050214 containerd[1693]: time="2025-11-06T00:22:49.050139022Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 00:22:49.050337 containerd[1693]: time="2025-11-06T00:22:49.050325946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 00:22:49.050425 containerd[1693]: time="2025-11-06T00:22:49.050414341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 00:22:49.050459 containerd[1693]: time="2025-11-06T00:22:49.050453159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 00:22:49.050484 containerd[1693]: time="2025-11-06T00:22:49.050479123Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 00:22:49.050511 containerd[1693]: time="2025-11-06T00:22:49.050506164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 00:22:49.050966 containerd[1693]: time="2025-11-06T00:22:49.050531369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 00:22:49.050966 containerd[1693]: time="2025-11-06T00:22:49.050540694Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 00:22:49.050966 containerd[1693]: time="2025-11-06T00:22:49.050558812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 00:22:49.050966 containerd[1693]: time="2025-11-06T00:22:49.050566750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 00:22:49.050966 containerd[1693]: time="2025-11-06T00:22:49.050574610Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 00:22:49.050966 containerd[1693]: time="2025-11-06T00:22:49.050599534Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:22:49.050966 containerd[1693]: time="2025-11-06T00:22:49.050609734Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:22:49.050966 containerd[1693]: time="2025-11-06T00:22:49.050616000Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:22:49.050966 containerd[1693]: time="2025-11-06T00:22:49.050622256Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:22:49.050966 containerd[1693]: time="2025-11-06T00:22:49.050627190Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 00:22:49.050966 containerd[1693]: time="2025-11-06T00:22:49.050633846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 00:22:49.050966 containerd[1693]: time="2025-11-06T00:22:49.050641566Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 00:22:49.050966 
containerd[1693]: time="2025-11-06T00:22:49.050656308Z" level=info msg="runtime interface created" Nov 6 00:22:49.050966 containerd[1693]: time="2025-11-06T00:22:49.050661369Z" level=info msg="created NRI interface" Nov 6 00:22:49.050966 containerd[1693]: time="2025-11-06T00:22:49.050669405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 00:22:49.051277 containerd[1693]: time="2025-11-06T00:22:49.050682709Z" level=info msg="Connect containerd service" Nov 6 00:22:49.051277 containerd[1693]: time="2025-11-06T00:22:49.050710070Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 00:22:49.051757 containerd[1693]: time="2025-11-06T00:22:49.051739356Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:22:49.434076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:49.610576 containerd[1693]: time="2025-11-06T00:22:49.610435047Z" level=info msg="Start subscribing containerd event" Nov 6 00:22:49.610576 containerd[1693]: time="2025-11-06T00:22:49.610500049Z" level=info msg="Start recovering state" Nov 6 00:22:49.610684 containerd[1693]: time="2025-11-06T00:22:49.610614234Z" level=info msg="Start event monitor" Nov 6 00:22:49.610684 containerd[1693]: time="2025-11-06T00:22:49.610626044Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:22:49.610684 containerd[1693]: time="2025-11-06T00:22:49.610633145Z" level=info msg="Start streaming server" Nov 6 00:22:49.610684 containerd[1693]: time="2025-11-06T00:22:49.610655994Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:22:49.610684 containerd[1693]: time="2025-11-06T00:22:49.610662865Z" level=info msg="runtime interface starting up..." Nov 6 00:22:49.610684 containerd[1693]: time="2025-11-06T00:22:49.610671035Z" level=info msg="starting plugins..." Nov 6 00:22:49.610684 containerd[1693]: time="2025-11-06T00:22:49.610682908Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:22:49.610996 containerd[1693]: time="2025-11-06T00:22:49.610976977Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:22:49.611105 containerd[1693]: time="2025-11-06T00:22:49.611086298Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 00:22:49.612359 containerd[1693]: time="2025-11-06T00:22:49.612319130Z" level=info msg="containerd successfully booted in 0.586595s" Nov 6 00:22:49.612467 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 00:22:49.615267 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:22:49.618622 systemd[1]: Startup finished in 3.141s (kernel) + 14.130s (initrd) + 16.683s (userspace) = 33.955s. 
Nov 6 00:22:49.619246 (kubelet)[1822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:22:50.102340 kubelet[1822]: E1106 00:22:50.102290 1822 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:22:50.104099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:22:50.104241 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:22:50.104686 systemd[1]: kubelet.service: Consumed 876ms CPU time, 255.9M memory peak. Nov 6 00:22:50.186582 login[1805]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 6 00:22:50.188421 login[1806]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 6 00:22:50.196548 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 00:22:50.198344 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 00:22:50.200540 systemd-logind[1676]: New session 2 of user core. Nov 6 00:22:50.204866 systemd-logind[1676]: New session 1 of user core. Nov 6 00:22:50.226420 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:22:50.228323 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 00:22:50.255938 (systemd)[1839]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:22:50.257704 systemd-logind[1676]: New session c1 of user core. Nov 6 00:22:50.578246 systemd[1839]: Queued start job for default target default.target. Nov 6 00:22:50.582789 systemd[1839]: Created slice app.slice - User Application Slice. Nov 6 00:22:50.582821 systemd[1839]: Reached target paths.target - Paths. Nov 6 00:22:50.582859 systemd[1839]: Reached target timers.target - Timers. Nov 6 00:22:50.583729 systemd[1839]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 00:22:50.595529 systemd[1839]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:22:50.595619 systemd[1839]: Reached target sockets.target - Sockets. Nov 6 00:22:50.595705 systemd[1839]: Reached target basic.target - Basic System. Nov 6 00:22:50.595763 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 00:22:50.596088 systemd[1839]: Reached target default.target - Main User Target. Nov 6 00:22:50.596116 systemd[1839]: Startup finished in 333ms. Nov 6 00:22:50.602114 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 00:22:50.603130 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 6 00:22:50.977356 waagent[1803]: 2025-11-06T00:22:50.977256Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Nov 6 00:22:50.977703 waagent[1803]: 2025-11-06T00:22:50.977660Z INFO Daemon Daemon OS: flatcar 4459.1.0 Nov 6 00:22:50.980076 waagent[1803]: 2025-11-06T00:22:50.979817Z INFO Daemon Daemon Python: 3.11.13 Nov 6 00:22:50.981303 waagent[1803]: 2025-11-06T00:22:50.981254Z INFO Daemon Daemon Run daemon Nov 6 00:22:50.982478 waagent[1803]: 2025-11-06T00:22:50.982437Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.1.0' Nov 6 00:22:50.982803 waagent[1803]: 2025-11-06T00:22:50.982774Z INFO Daemon Daemon Using waagent for provisioning Nov 6 00:22:50.986245 waagent[1803]: 2025-11-06T00:22:50.986204Z INFO Daemon Daemon Activate resource disk Nov 6 00:22:50.987446 waagent[1803]: 2025-11-06T00:22:50.987363Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 6 00:22:50.990734 waagent[1803]: 2025-11-06T00:22:50.990690Z INFO Daemon Daemon Found device: None Nov 6 00:22:50.991804 waagent[1803]: 2025-11-06T00:22:50.991768Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 6 00:22:50.991899 waagent[1803]: 2025-11-06T00:22:50.991857Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 6 00:22:50.992372 waagent[1803]: 2025-11-06T00:22:50.992329Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 6 00:22:50.992476 waagent[1803]: 2025-11-06T00:22:50.992448Z INFO Daemon Daemon Running default provisioning handler Nov 6 00:22:50.997740 waagent[1803]: 2025-11-06T00:22:50.997473Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Nov 6 00:22:50.998149 waagent[1803]: 2025-11-06T00:22:50.998114Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 6 00:22:50.998258 waagent[1803]: 2025-11-06T00:22:50.998236Z INFO Daemon Daemon cloud-init is enabled: False Nov 6 00:22:50.998315 waagent[1803]: 2025-11-06T00:22:50.998297Z INFO Daemon Daemon Copying ovf-env.xml Nov 6 00:22:51.134517 waagent[1803]: 2025-11-06T00:22:51.134094Z INFO Daemon Daemon Successfully mounted dvd Nov 6 00:22:51.158684 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 6 00:22:51.160563 waagent[1803]: 2025-11-06T00:22:51.160478Z INFO Daemon Daemon Detect protocol endpoint Nov 6 00:22:51.168082 waagent[1803]: 2025-11-06T00:22:51.160648Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 6 00:22:51.168082 waagent[1803]: 2025-11-06T00:22:51.160892Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Nov 6 00:22:51.168082 waagent[1803]: 2025-11-06T00:22:51.161224Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 6 00:22:51.168082 waagent[1803]: 2025-11-06T00:22:51.161618Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 6 00:22:51.168082 waagent[1803]: 2025-11-06T00:22:51.161921Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 6 00:22:51.175263 waagent[1803]: 2025-11-06T00:22:51.175231Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 6 00:22:51.176408 waagent[1803]: 2025-11-06T00:22:51.175523Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 6 00:22:51.176408 waagent[1803]: 2025-11-06T00:22:51.175728Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 6 00:22:51.267168 waagent[1803]: 2025-11-06T00:22:51.267069Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 6 00:22:51.268501 waagent[1803]: 2025-11-06T00:22:51.267267Z INFO Daemon Daemon Forcing an update of the goal state. Nov 6 00:22:51.273424 waagent[1803]: 2025-11-06T00:22:51.270117Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 6 00:22:51.283189 waagent[1803]: 2025-11-06T00:22:51.283162Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Nov 6 00:22:51.285728 waagent[1803]: 2025-11-06T00:22:51.283648Z INFO Daemon Nov 6 00:22:51.285728 waagent[1803]: 2025-11-06T00:22:51.283736Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b910d857-8368-43c0-8b37-60d33b459249 eTag: 3958149244772605268 source: Fabric] Nov 6 00:22:51.285728 waagent[1803]: 2025-11-06T00:22:51.284574Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Nov 6 00:22:51.285728 waagent[1803]: 2025-11-06T00:22:51.284966Z INFO Daemon Nov 6 00:22:51.285728 waagent[1803]: 2025-11-06T00:22:51.285179Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 6 00:22:51.293973 waagent[1803]: 2025-11-06T00:22:51.293207Z INFO Daemon Daemon Downloading artifacts profile blob Nov 6 00:22:51.379210 waagent[1803]: 2025-11-06T00:22:51.379165Z INFO Daemon Downloaded certificate {'thumbprint': '278E2A220E55CBFEBD7F3A2D100BF75CF8B9AEB7', 'hasPrivateKey': True} Nov 6 00:22:51.380771 waagent[1803]: 2025-11-06T00:22:51.380740Z INFO Daemon Fetch goal state completed Nov 6 00:22:51.388046 waagent[1803]: 2025-11-06T00:22:51.388001Z INFO Daemon Daemon Starting provisioning Nov 6 00:22:51.389507 waagent[1803]: 2025-11-06T00:22:51.388986Z INFO Daemon Daemon Handle ovf-env.xml. Nov 6 00:22:51.389507 waagent[1803]: 2025-11-06T00:22:51.389192Z INFO Daemon Daemon Set hostname [ci-4459.1.0-n-288d7afe99] Nov 6 00:22:51.419329 waagent[1803]: 2025-11-06T00:22:51.419290Z INFO Daemon Daemon Publish hostname [ci-4459.1.0-n-288d7afe99] Nov 6 00:22:51.421330 waagent[1803]: 2025-11-06T00:22:51.421285Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 6 00:22:51.421583 waagent[1803]: 2025-11-06T00:22:51.421552Z INFO Daemon Daemon Primary interface is [eth0] Nov 6 00:22:51.428532 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:22:51.428539 systemd-networkd[1447]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 6 00:22:51.428558 systemd-networkd[1447]: eth0: DHCP lease lost Nov 6 00:22:51.429454 waagent[1803]: 2025-11-06T00:22:51.429413Z INFO Daemon Daemon Create user account if not exists Nov 6 00:22:51.430634 waagent[1803]: 2025-11-06T00:22:51.429623Z INFO Daemon Daemon User core already exists, skip useradd Nov 6 00:22:51.430634 waagent[1803]: 2025-11-06T00:22:51.429792Z INFO Daemon Daemon Configure sudoer Nov 6 00:22:51.434027 waagent[1803]: 2025-11-06T00:22:51.433986Z INFO Daemon Daemon Configure sshd Nov 6 00:22:51.438700 waagent[1803]: 2025-11-06T00:22:51.438659Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 6 00:22:51.444370 waagent[1803]: 2025-11-06T00:22:51.438809Z INFO Daemon Daemon Deploy ssh public key. Nov 6 00:22:51.445035 systemd-networkd[1447]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 6 00:22:52.552561 waagent[1803]: 2025-11-06T00:22:52.552514Z INFO Daemon Daemon Provisioning complete Nov 6 00:22:52.564298 waagent[1803]: 2025-11-06T00:22:52.564262Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 6 00:22:52.565800 waagent[1803]: 2025-11-06T00:22:52.565763Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Nov 6 00:22:52.566026 waagent[1803]: 2025-11-06T00:22:52.565999Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Nov 6 00:22:52.672139 waagent[1889]: 2025-11-06T00:22:52.672071Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Nov 6 00:22:52.672406 waagent[1889]: 2025-11-06T00:22:52.672166Z INFO ExtHandler ExtHandler OS: flatcar 4459.1.0 Nov 6 00:22:52.672406 waagent[1889]: 2025-11-06T00:22:52.672207Z INFO ExtHandler ExtHandler Python: 3.11.13 Nov 6 00:22:52.672406 waagent[1889]: 2025-11-06T00:22:52.672247Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Nov 6 00:22:52.735806 waagent[1889]: 2025-11-06T00:22:52.735754Z INFO ExtHandler ExtHandler Distro: flatcar-4459.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Nov 6 00:22:52.735943 waagent[1889]: 2025-11-06T00:22:52.735916Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 6 00:22:52.736006 waagent[1889]: 2025-11-06T00:22:52.735992Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 6 00:22:52.742765 waagent[1889]: 2025-11-06T00:22:52.742710Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 6 00:22:52.757166 waagent[1889]: 2025-11-06T00:22:52.757138Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 6 00:22:52.757498 waagent[1889]: 2025-11-06T00:22:52.757467Z INFO ExtHandler Nov 6 00:22:52.757549 waagent[1889]: 2025-11-06T00:22:52.757524Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3363340f-8600-4c4f-adf5-93d5a0ea0250 eTag: 3958149244772605268 source: Fabric] Nov 6 00:22:52.757751 waagent[1889]: 2025-11-06T00:22:52.757727Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Nov 6 00:22:52.758110 waagent[1889]: 2025-11-06T00:22:52.758081Z INFO ExtHandler Nov 6 00:22:52.758156 waagent[1889]: 2025-11-06T00:22:52.758126Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 6 00:22:52.765712 waagent[1889]: 2025-11-06T00:22:52.765687Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 6 00:22:52.824220 waagent[1889]: 2025-11-06T00:22:52.824144Z INFO ExtHandler Downloaded certificate {'thumbprint': '278E2A220E55CBFEBD7F3A2D100BF75CF8B9AEB7', 'hasPrivateKey': True} Nov 6 00:22:52.824520 waagent[1889]: 2025-11-06T00:22:52.824491Z INFO ExtHandler Fetch goal state completed Nov 6 00:22:52.836489 waagent[1889]: 2025-11-06T00:22:52.836444Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Nov 6 00:22:52.840548 waagent[1889]: 2025-11-06T00:22:52.840502Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1889 Nov 6 00:22:52.840662 waagent[1889]: 2025-11-06T00:22:52.840621Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 6 00:22:52.840893 waagent[1889]: 2025-11-06T00:22:52.840871Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Nov 6 00:22:52.841964 waagent[1889]: 2025-11-06T00:22:52.841911Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.1.0', '', 'Flatcar Container Linux by Kinvolk'] Nov 6 00:22:52.842268 waagent[1889]: 2025-11-06T00:22:52.842239Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Nov 6 00:22:52.842364 waagent[1889]: 2025-11-06T00:22:52.842341Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 6 00:22:52.842748 waagent[1889]: 2025-11-06T00:22:52.842722Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 6 00:22:52.914325 waagent[1889]: 2025-11-06T00:22:52.914294Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 6 00:22:52.914479 waagent[1889]: 2025-11-06T00:22:52.914455Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 6 00:22:52.919946 waagent[1889]: 2025-11-06T00:22:52.919584Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 6 00:22:52.924595 systemd[1]: Reload requested from client PID 1904 ('systemctl') (unit waagent.service)... Nov 6 00:22:52.924634 systemd[1]: Reloading... Nov 6 00:22:52.986979 zram_generator::config[1940]: No configuration found. Nov 6 00:22:53.174972 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#145 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 6 00:22:53.182520 systemd[1]: Reloading finished in 257 ms. Nov 6 00:22:53.197482 waagent[1889]: 2025-11-06T00:22:53.197183Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 6 00:22:53.197482 waagent[1889]: 2025-11-06T00:22:53.197299Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 6 00:22:53.559816 waagent[1889]: 2025-11-06T00:22:53.559692Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Nov 6 00:22:53.560101 waagent[1889]: 2025-11-06T00:22:53.560072Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Nov 6 00:22:53.560859 waagent[1889]: 2025-11-06T00:22:53.560815Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 6 00:22:53.561049 waagent[1889]: 2025-11-06T00:22:53.561007Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 6 00:22:53.561228 waagent[1889]: 2025-11-06T00:22:53.561090Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 6 00:22:53.561292 waagent[1889]: 2025-11-06T00:22:53.561253Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 6 00:22:53.561538 waagent[1889]: 2025-11-06T00:22:53.561509Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 6 00:22:53.561688 waagent[1889]: 2025-11-06T00:22:53.561652Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 6 00:22:53.561866 waagent[1889]: 2025-11-06T00:22:53.561845Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 6 00:22:53.561936 waagent[1889]: 2025-11-06T00:22:53.561915Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 6 00:22:53.562088 waagent[1889]: 2025-11-06T00:22:53.562069Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 6 00:22:53.562339 waagent[1889]: 2025-11-06T00:22:53.562318Z INFO EnvHandler ExtHandler Configure routes Nov 6 00:22:53.562423 waagent[1889]: 2025-11-06T00:22:53.562392Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 6 00:22:53.562423 waagent[1889]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 6 00:22:53.562423 waagent[1889]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Nov 6 00:22:53.562423 waagent[1889]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 6 00:22:53.562423 waagent[1889]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 6 00:22:53.562423 waagent[1889]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 6 00:22:53.562423 waagent[1889]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 6 00:22:53.562623 waagent[1889]: 2025-11-06T00:22:53.562505Z INFO EnvHandler ExtHandler Gateway:None Nov 6 00:22:53.562623 waagent[1889]: 2025-11-06T00:22:53.562541Z INFO EnvHandler ExtHandler Routes:None Nov 6 00:22:53.563186 waagent[1889]: 2025-11-06T00:22:53.563148Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 6 00:22:53.563643 waagent[1889]: 2025-11-06T00:22:53.563303Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Nov 6 00:22:53.563768 waagent[1889]: 2025-11-06T00:22:53.563735Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 6 00:22:53.569176 waagent[1889]: 2025-11-06T00:22:53.569144Z INFO ExtHandler ExtHandler Nov 6 00:22:53.569248 waagent[1889]: 2025-11-06T00:22:53.569205Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 29b7d9ce-7509-4ef0-b5d2-758a5d5e3a92 correlation f0a2d318-fb32-4a17-8044-26abe937d964 created: 2025-11-06T00:21:34.681053Z] Nov 6 00:22:53.569473 waagent[1889]: 2025-11-06T00:22:53.569450Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Nov 6 00:22:53.569895 waagent[1889]: 2025-11-06T00:22:53.569874Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Nov 6 00:22:53.624224 waagent[1889]: 2025-11-06T00:22:53.624178Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Nov 6 00:22:53.624224 waagent[1889]: Try `iptables -h' or 'iptables --help' for more information.) Nov 6 00:22:53.624565 waagent[1889]: 2025-11-06T00:22:53.624538Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 904201E2-24C5-44C7-8D49-711C0EF1517E;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Nov 6 00:22:53.654533 waagent[1889]: 2025-11-06T00:22:53.654490Z INFO MonitorHandler ExtHandler Network interfaces: Nov 6 00:22:53.654533 waagent[1889]: Executing ['ip', '-a', '-o', 'link']: Nov 6 00:22:53.654533 waagent[1889]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 6 00:22:53.654533 waagent[1889]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:fa:b6:0a brd ff:ff:ff:ff:ff:ff\ alias Network Device Nov 6 00:22:53.654533 waagent[1889]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:fa:b6:0a brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Nov 6 00:22:53.654533 waagent[1889]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 6 00:22:53.654533 waagent[1889]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 6 00:22:53.654533 waagent[1889]: 2: eth0 inet 10.200.8.40/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 6 00:22:53.654533 waagent[1889]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 6 00:22:53.654533 waagent[1889]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 6 00:22:53.654533 waagent[1889]: 2: eth0 inet6 fe80::7e1e:52ff:fefa:b60a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 6 00:22:53.778619 waagent[1889]: 2025-11-06T00:22:53.778572Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Nov 6 00:22:53.778619 waagent[1889]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:22:53.778619 waagent[1889]: pkts bytes target prot opt in out source destination Nov 6 00:22:53.778619 waagent[1889]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:22:53.778619 waagent[1889]: pkts bytes target prot opt in out source destination Nov 6 00:22:53.778619 waagent[1889]: Chain OUTPUT (policy 
ACCEPT 0 packets, 0 bytes) Nov 6 00:22:53.778619 waagent[1889]: pkts bytes target prot opt in out source destination Nov 6 00:22:53.778619 waagent[1889]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 6 00:22:53.778619 waagent[1889]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 6 00:22:53.778619 waagent[1889]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 6 00:22:53.781306 waagent[1889]: 2025-11-06T00:22:53.781260Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 6 00:22:53.781306 waagent[1889]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:22:53.781306 waagent[1889]: pkts bytes target prot opt in out source destination Nov 6 00:22:53.781306 waagent[1889]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:22:53.781306 waagent[1889]: pkts bytes target prot opt in out source destination Nov 6 00:22:53.781306 waagent[1889]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:22:53.781306 waagent[1889]: pkts bytes target prot opt in out source destination Nov 6 00:22:53.781306 waagent[1889]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 6 00:22:53.781306 waagent[1889]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 6 00:22:53.781306 waagent[1889]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 6 00:23:00.355099 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 00:23:00.356492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:00.882014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:00.885093 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:23:00.922776 kubelet[2041]: E1106 00:23:00.922741 2041 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:23:00.925532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:23:00.925654 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:23:00.925994 systemd[1]: kubelet.service: Consumed 134ms CPU time, 110.3M memory peak. Nov 6 00:23:11.076664 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 00:23:11.078080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:11.576342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:11.579587 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:23:11.613559 kubelet[2056]: E1106 00:23:11.613504 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:23:11.615199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:23:11.615345 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
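The three OUTPUT rules EnvHandler lists above (accept DNS to the wire server, accept root-owned traffic, drop other new connections to 168.63.129.16) can be expressed as ordinary iptables invocations. The sketch below is an assumption for illustration, not waagent's own code: it only prints the commands instead of running them, and uses the `security` table because that is the table the heartbeat's iptables call in the log queries.

```python
# Minimal sketch (hedged reconstruction of the rules logged above): print the
# iptables commands that would produce the Azure-fabric OUTPUT chain.
WIRE_SERVER = "168.63.129.16"

RULES = [
    # Allow DNS (tcp dpt:53) to the wire server.
    ["-A", "OUTPUT", "-d", WIRE_SERVER, "-p", "tcp",
     "--dport", "53", "-j", "ACCEPT"],
    # Allow traffic owned by UID 0 (the agent itself runs as root).
    ["-A", "OUTPUT", "-d", WIRE_SERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # Drop anything else opening a new connection to the wire server.
    ["-A", "OUTPUT", "-d", WIRE_SERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

if __name__ == "__main__":
    for rule in RULES:
        print("iptables -w -t security " + " ".join(rule))
```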
Nov 6 00:23:11.615700 systemd[1]: kubelet.service: Consumed 126ms CPU time, 110.5M memory peak. Nov 6 00:23:11.928522 chronyd[1653]: Selected source PHC0 Nov 6 00:23:21.147580 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 00:23:21.148760 systemd[1]: Started sshd@0-10.200.8.40:22-10.200.16.10:34746.service - OpenSSH per-connection server daemon (10.200.16.10:34746). Nov 6 00:23:21.825659 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 6 00:23:21.827196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:21.935792 sshd[2064]: Accepted publickey for core from 10.200.16.10 port 34746 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:23:21.936842 sshd-session[2064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:21.940518 systemd-logind[1676]: New session 3 of user core. Nov 6 00:23:21.950091 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 00:23:22.295087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:22.303203 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:23:22.335065 kubelet[2076]: E1106 00:23:22.335031 2076 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:23:22.336491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:23:22.336617 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:23:22.336899 systemd[1]: kubelet.service: Consumed 127ms CPU time, 110M memory peak. Nov 6 00:23:22.485616 systemd[1]: Started sshd@1-10.200.8.40:22-10.200.16.10:34750.service - OpenSSH per-connection server daemon (10.200.16.10:34750). Nov 6 00:23:23.112554 sshd[2085]: Accepted publickey for core from 10.200.16.10 port 34750 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:23:23.119500 sshd-session[2085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:23.123762 systemd-logind[1676]: New session 4 of user core. Nov 6 00:23:23.133087 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 00:23:23.555888 sshd[2088]: Connection closed by 10.200.16.10 port 34750 Nov 6 00:23:23.556624 sshd-session[2085]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:23.559311 systemd[1]: sshd@1-10.200.8.40:22-10.200.16.10:34750.service: Deactivated successfully. Nov 6 00:23:23.560890 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 00:23:23.562178 systemd-logind[1676]: Session 4 logged out. Waiting for processes to exit. Nov 6 00:23:23.563275 systemd-logind[1676]: Removed session 4. Nov 6 00:23:23.667544 systemd[1]: Started sshd@2-10.200.8.40:22-10.200.16.10:34758.service - OpenSSH per-connection server daemon (10.200.16.10:34758). Nov 6 00:23:24.305639 sshd[2094]: Accepted publickey for core from 10.200.16.10 port 34758 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:23:24.306768 sshd-session[2094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:24.311307 systemd-logind[1676]: New session 5 of user core. 
Nov 6 00:23:24.318094 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 00:23:24.761524 sshd[2097]: Connection closed by 10.200.16.10 port 34758 Nov 6 00:23:24.762098 sshd-session[2094]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:24.765183 systemd[1]: sshd@2-10.200.8.40:22-10.200.16.10:34758.service: Deactivated successfully. Nov 6 00:23:24.766637 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 00:23:24.767388 systemd-logind[1676]: Session 5 logged out. Waiting for processes to exit. Nov 6 00:23:24.768499 systemd-logind[1676]: Removed session 5. Nov 6 00:23:24.883667 systemd[1]: Started sshd@3-10.200.8.40:22-10.200.16.10:34764.service - OpenSSH per-connection server daemon (10.200.16.10:34764). Nov 6 00:23:25.514826 sshd[2103]: Accepted publickey for core from 10.200.16.10 port 34764 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:23:25.515877 sshd-session[2103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:25.520351 systemd-logind[1676]: New session 6 of user core. Nov 6 00:23:25.526106 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 00:23:25.960037 sshd[2106]: Connection closed by 10.200.16.10 port 34764 Nov 6 00:23:25.960589 sshd-session[2103]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:25.963771 systemd[1]: sshd@3-10.200.8.40:22-10.200.16.10:34764.service: Deactivated successfully. Nov 6 00:23:25.965370 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 00:23:25.966122 systemd-logind[1676]: Session 6 logged out. Waiting for processes to exit. Nov 6 00:23:25.967146 systemd-logind[1676]: Removed session 6. Nov 6 00:23:26.071893 systemd[1]: Started sshd@4-10.200.8.40:22-10.200.16.10:34768.service - OpenSSH per-connection server daemon (10.200.16.10:34768). Nov 6 00:23:26.701236 sshd[2112]: Accepted publickey for core from 10.200.16.10 port 34768 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:23:26.702362 sshd-session[2112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:26.706879 systemd-logind[1676]: New session 7 of user core. Nov 6 00:23:26.712106 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 00:23:27.275026 sudo[2116]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 00:23:27.275252 sudo[2116]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:23:27.321714 sudo[2116]: pam_unix(sudo:session): session closed for user root Nov 6 00:23:27.424295 sshd[2115]: Connection closed by 10.200.16.10 port 34768 Nov 6 00:23:27.424872 sshd-session[2112]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:27.427928 systemd[1]: sshd@4-10.200.8.40:22-10.200.16.10:34768.service: Deactivated successfully. Nov 6 00:23:27.429460 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 00:23:27.430618 systemd-logind[1676]: Session 7 logged out. Waiting for processes to exit. Nov 6 00:23:27.431881 systemd-logind[1676]: Removed session 7. Nov 6 00:23:27.536483 systemd[1]: Started sshd@5-10.200.8.40:22-10.200.16.10:34776.service - OpenSSH per-connection server daemon (10.200.16.10:34776). 
Nov 6 00:23:28.165513 sshd[2122]: Accepted publickey for core from 10.200.16.10 port 34776 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:23:28.167246 sshd-session[2122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:28.171817 systemd-logind[1676]: New session 8 of user core. Nov 6 00:23:28.178111 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 00:23:28.509421 sudo[2127]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 00:23:28.509819 sudo[2127]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:23:28.515523 sudo[2127]: pam_unix(sudo:session): session closed for user root Nov 6 00:23:28.519346 sudo[2126]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 00:23:28.519560 sudo[2126]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:23:28.527168 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:23:28.556537 augenrules[2149]: No rules Nov 6 00:23:28.557512 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:23:28.557726 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:23:28.558441 sudo[2126]: pam_unix(sudo:session): session closed for user root Nov 6 00:23:28.666040 sshd[2125]: Connection closed by 10.200.16.10 port 34776 Nov 6 00:23:28.666484 sshd-session[2122]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:28.669666 systemd[1]: sshd@5-10.200.8.40:22-10.200.16.10:34776.service: Deactivated successfully. Nov 6 00:23:28.671159 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 00:23:28.671779 systemd-logind[1676]: Session 8 logged out. Waiting for processes to exit. Nov 6 00:23:28.672889 systemd-logind[1676]: Removed session 8. Nov 6 00:23:28.776676 systemd[1]: Started sshd@6-10.200.8.40:22-10.200.16.10:34784.service - OpenSSH per-connection server daemon (10.200.16.10:34784). Nov 6 00:23:29.405487 sshd[2158]: Accepted publickey for core from 10.200.16.10 port 34784 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:23:29.406604 sshd-session[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:29.410843 systemd-logind[1676]: New session 9 of user core. Nov 6 00:23:29.418100 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 00:23:29.749603 sudo[2162]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 00:23:29.749819 sudo[2162]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:23:29.952846 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Nov 6 00:23:31.495680 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 00:23:31.505296 (dockerd)[2180]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 00:23:32.576567 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 6 00:23:32.578213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 6 00:23:32.913071 dockerd[2180]: time="2025-11-06T00:23:32.912940585Z" level=info msg="Starting up" Nov 6 00:23:32.917973 dockerd[2180]: time="2025-11-06T00:23:32.917612940Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 00:23:32.927404 dockerd[2180]: time="2025-11-06T00:23:32.927363924Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 00:23:33.244471 systemd[1]: var-lib-docker-metacopy\x2dcheck1612925509-merged.mount: Deactivated successfully. Nov 6 00:23:33.275469 dockerd[2180]: time="2025-11-06T00:23:33.275426316Z" level=info msg="Loading containers: start." Nov 6 00:23:33.297025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:33.303170 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:23:33.335225 kubelet[2208]: E1106 00:23:33.335185 2208 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:23:33.336711 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:23:33.336836 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:23:33.337144 systemd[1]: kubelet.service: Consumed 127ms CPU time, 110.1M memory peak. Nov 6 00:23:33.383979 kernel: Initializing XFRM netlink socket Nov 6 00:23:33.663796 update_engine[1677]: I20251106 00:23:33.663267 1677 update_attempter.cc:509] Updating boot flags... Nov 6 00:23:33.901841 systemd-networkd[1447]: docker0: Link UP Nov 6 00:23:33.914555 dockerd[2180]: time="2025-11-06T00:23:33.914364361Z" level=info msg="Loading containers: done." Nov 6 00:23:33.962653 dockerd[2180]: time="2025-11-06T00:23:33.962611492Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 00:23:33.962791 dockerd[2180]: time="2025-11-06T00:23:33.962700990Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 00:23:33.962791 dockerd[2180]: time="2025-11-06T00:23:33.962786108Z" level=info msg="Initializing buildkit" Nov 6 00:23:34.008475 dockerd[2180]: time="2025-11-06T00:23:34.008421465Z" level=info msg="Completed buildkit initialization" Nov 6 00:23:34.015260 dockerd[2180]: time="2025-11-06T00:23:34.015218049Z" level=info msg="Daemon has completed initialization" Nov 6 00:23:34.015991 dockerd[2180]: time="2025-11-06T00:23:34.015379156Z" level=info msg="API listen on /run/docker.sock" Nov 6 00:23:34.015453 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 00:23:34.934141 containerd[1693]: time="2025-11-06T00:23:34.934106136Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 6 00:23:35.621270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704120930.mount: Deactivated successfully. 
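[Annotation] The kubelet failure above is caused by the missing /var/lib/kubelet/config.yaml; on a kubeadm-provisioned node that file is normally written by `kubeadm init`/`kubeadm join`, and until it exists the unit keeps exiting and systemd keeps scheduling restarts, which is the loop visible in this log. As a rough illustration only (the actual file contents are not in this log; the cluster DNS address below is a placeholder), a minimal KubeletConfiguration for a node like this might look like:

```yaml
# Hypothetical sketch of /var/lib/kubelet/config.yaml -- normally generated by
# kubeadm; none of these values are taken from the log, they only mirror the
# settings the kubelet later reports (systemd cgroup driver, static pod path,
# client CA at /etc/kubernetes/pki/ca.crt).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                      # matches "Using cgroup driver ... systemd" later in the log
staticPodPath: /etc/kubernetes/manifests   # matches "Adding static pod path" later in the log
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
clusterDNS:
  - 10.96.0.10                             # placeholder cluster DNS service address
clusterDomain: cluster.local
```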
Nov 6 00:23:36.863787 containerd[1693]: time="2025-11-06T00:23:36.863733780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:36.866554 containerd[1693]: time="2025-11-06T00:23:36.866518414Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065400" Nov 6 00:23:36.869377 containerd[1693]: time="2025-11-06T00:23:36.869339084Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:36.872767 containerd[1693]: time="2025-11-06T00:23:36.872523063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:36.873209 containerd[1693]: time="2025-11-06T00:23:36.873187812Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.939047732s" Nov 6 00:23:36.873258 containerd[1693]: time="2025-11-06T00:23:36.873222376Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 6 00:23:36.873728 containerd[1693]: time="2025-11-06T00:23:36.873702779Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 6 00:23:38.285925 containerd[1693]: time="2025-11-06T00:23:38.285876252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:38.287884 containerd[1693]: time="2025-11-06T00:23:38.287847072Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159765" Nov 6 00:23:38.290256 containerd[1693]: time="2025-11-06T00:23:38.290220281Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:38.293870 containerd[1693]: time="2025-11-06T00:23:38.293828497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:38.294565 containerd[1693]: time="2025-11-06T00:23:38.294414851Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.420685242s" Nov 6 00:23:38.294565 containerd[1693]: time="2025-11-06T00:23:38.294447158Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 6 00:23:38.295074 containerd[1693]: 
time="2025-11-06T00:23:38.295046585Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 6 00:23:39.725733 containerd[1693]: time="2025-11-06T00:23:39.725683002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:39.727773 containerd[1693]: time="2025-11-06T00:23:39.727744008Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725101" Nov 6 00:23:39.730223 containerd[1693]: time="2025-11-06T00:23:39.730173009Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:39.734822 containerd[1693]: time="2025-11-06T00:23:39.734782660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:39.735538 containerd[1693]: time="2025-11-06T00:23:39.735510280Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.440439722s" Nov 6 00:23:39.735538 containerd[1693]: time="2025-11-06T00:23:39.735536965Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 6 00:23:39.736300 containerd[1693]: time="2025-11-06T00:23:39.736271459Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 6 00:23:40.796000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1020476519.mount: Deactivated successfully. 
Nov 6 00:23:41.061663 containerd[1693]: time="2025-11-06T00:23:41.061563547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:41.063579 containerd[1693]: time="2025-11-06T00:23:41.063546761Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964707" Nov 6 00:23:41.068611 containerd[1693]: time="2025-11-06T00:23:41.067840243Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:41.070839 containerd[1693]: time="2025-11-06T00:23:41.070812320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:41.071324 containerd[1693]: time="2025-11-06T00:23:41.071298235Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.334992646s" Nov 6 00:23:41.071404 containerd[1693]: time="2025-11-06T00:23:41.071391930Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 6 00:23:41.071883 containerd[1693]: time="2025-11-06T00:23:41.071855386Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 6 00:23:41.657386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount708290342.mount: Deactivated successfully. 
Nov 6 00:23:42.725006 containerd[1693]: time="2025-11-06T00:23:42.724945403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:42.726903 containerd[1693]: time="2025-11-06T00:23:42.726867751Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Nov 6 00:23:42.729298 containerd[1693]: time="2025-11-06T00:23:42.729258232Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:42.732653 containerd[1693]: time="2025-11-06T00:23:42.732611855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:42.733490 containerd[1693]: time="2025-11-06T00:23:42.733301269Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.661419397s" Nov 6 00:23:42.733490 containerd[1693]: time="2025-11-06T00:23:42.733332551Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 6 00:23:42.733948 containerd[1693]: time="2025-11-06T00:23:42.733926249Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 6 00:23:43.243516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183099931.mount: Deactivated successfully. 
Nov 6 00:23:43.263307 containerd[1693]: time="2025-11-06T00:23:43.263266141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:43.266938 containerd[1693]: time="2025-11-06T00:23:43.266911838Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Nov 6 00:23:43.276658 containerd[1693]: time="2025-11-06T00:23:43.276618519Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:43.280432 containerd[1693]: time="2025-11-06T00:23:43.280392549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:43.280860 containerd[1693]: time="2025-11-06T00:23:43.280838795Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 546.885464ms" Nov 6 00:23:43.280936 containerd[1693]: time="2025-11-06T00:23:43.280925059Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 6 00:23:43.281570 containerd[1693]: time="2025-11-06T00:23:43.281397076Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 6 00:23:43.576654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 6 00:23:43.578014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:44.090864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:44.103177 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:23:44.140139 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:23:44.150526 kubelet[2562]: E1106 00:23:44.138779 2562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:23:44.140246 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:23:44.140572 systemd[1]: kubelet.service: Consumed 125ms CPU time, 110.4M memory peak. 
Nov 6 00:23:46.922064 containerd[1693]: time="2025-11-06T00:23:46.922015824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:46.924507 containerd[1693]: time="2025-11-06T00:23:46.924470471Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514601" Nov 6 00:23:46.926834 containerd[1693]: time="2025-11-06T00:23:46.926795082Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:46.933069 containerd[1693]: time="2025-11-06T00:23:46.933017135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:46.933928 containerd[1693]: time="2025-11-06T00:23:46.933900452Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.65247716s" Nov 6 00:23:46.933991 containerd[1693]: time="2025-11-06T00:23:46.933928516Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 6 00:23:49.563147 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:49.563309 systemd[1]: kubelet.service: Consumed 125ms CPU time, 110.4M memory peak. Nov 6 00:23:49.565400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:49.592416 systemd[1]: Reload requested from client PID 2638 ('systemctl') (unit session-9.scope)... Nov 6 00:23:49.592433 systemd[1]: Reloading... Nov 6 00:23:49.681983 zram_generator::config[2685]: No configuration found. Nov 6 00:23:49.873982 systemd[1]: Reloading finished in 280 ms. Nov 6 00:23:50.000535 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 00:23:50.000622 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 6 00:23:50.000918 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:50.000991 systemd[1]: kubelet.service: Consumed 83ms CPU time, 87.5M memory peak. Nov 6 00:23:50.002896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:50.810002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:50.816245 (kubelet)[2752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:23:50.854588 kubelet[2752]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:23:50.854588 kubelet[2752]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 6 00:23:50.855650 kubelet[2752]: I1106 00:23:50.854639 2752 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:23:51.265254 kubelet[2752]: I1106 00:23:51.265226 2752 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 00:23:51.265254 kubelet[2752]: I1106 00:23:51.265248 2752 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:23:51.265386 kubelet[2752]: I1106 00:23:51.265269 2752 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 00:23:51.265386 kubelet[2752]: I1106 00:23:51.265278 2752 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 00:23:51.265493 kubelet[2752]: I1106 00:23:51.265480 2752 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:23:51.501220 kubelet[2752]: E1106 00:23:51.501144 2752 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 00:23:51.501969 kubelet[2752]: I1106 00:23:51.501682 2752 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:23:51.506616 kubelet[2752]: I1106 00:23:51.506592 2752 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:23:51.509160 kubelet[2752]: I1106 00:23:51.509139 2752 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 6 00:23:51.510006 kubelet[2752]: I1106 00:23:51.509949 2752 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:23:51.510162 kubelet[2752]: I1106 00:23:51.510006 2752 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-288d7afe99","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:23:51.510288 kubelet[2752]: I1106 00:23:51.510168 2752 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:23:51.510288 kubelet[2752]: I1106 00:23:51.510177 2752 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 00:23:51.510288 kubelet[2752]: I1106 00:23:51.510261 2752 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 6 00:23:51.514369 kubelet[2752]: I1106 00:23:51.514354 2752 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:23:51.514506 kubelet[2752]: I1106 00:23:51.514490 2752 kubelet.go:475] "Attempting to sync node with API server" Nov 6 00:23:51.514506 kubelet[2752]: I1106 00:23:51.514505 2752 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:23:51.514562 kubelet[2752]: I1106 00:23:51.514523 2752 kubelet.go:387] "Adding apiserver pod source" Nov 6 00:23:51.514562 kubelet[2752]: I1106 00:23:51.514545 2752 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:23:51.520391 kubelet[2752]: E1106 00:23:51.519535 2752 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-288d7afe99&limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:23:51.520391 kubelet[2752]: E1106 00:23:51.519852 2752 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:23:51.520620 kubelet[2752]: I1106 00:23:51.520592 2752 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:23:51.521061 kubelet[2752]: I1106 00:23:51.521035 2752 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:23:51.521108 kubelet[2752]: I1106 00:23:51.521067 2752 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 00:23:51.521139 kubelet[2752]: W1106 00:23:51.521111 2752 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 6 00:23:51.524422 kubelet[2752]: I1106 00:23:51.524299 2752 server.go:1262] "Started kubelet" Nov 6 00:23:51.525492 kubelet[2752]: I1106 00:23:51.525276 2752 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:23:51.529493 kubelet[2752]: E1106 00:23:51.527831 2752 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-n-288d7afe99.1875431f7230d8b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-n-288d7afe99,UID:ci-4459.1.0-n-288d7afe99,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-n-288d7afe99,},FirstTimestamp:2025-11-06 00:23:51.52426821 +0000 UTC m=+0.704338754,LastTimestamp:2025-11-06 00:23:51.52426821 +0000 UTC m=+0.704338754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-n-288d7afe99,}" Nov 6 00:23:51.531227 kubelet[2752]: E1106 00:23:51.531196 2752 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:23:51.531408 kubelet[2752]: I1106 00:23:51.531380 2752 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:23:51.532311 kubelet[2752]: I1106 00:23:51.532295 2752 server.go:310] "Adding debug handlers to kubelet server" Nov 6 00:23:51.532630 kubelet[2752]: I1106 00:23:51.532614 2752 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 6 00:23:51.532808 kubelet[2752]: E1106 00:23:51.532792 2752 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-288d7afe99\" not found" Nov 6 00:23:51.535609 kubelet[2752]: I1106 00:23:51.535587 2752 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 6 00:23:51.535674 kubelet[2752]: I1106 00:23:51.535626 2752 reconciler.go:29] "Reconciler: start to sync state" Nov 6 00:23:51.536720 kubelet[2752]: I1106 00:23:51.536210 2752 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:23:51.536720 kubelet[2752]: I1106 00:23:51.536256 2752 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 00:23:51.536720 kubelet[2752]: I1106 00:23:51.536395 2752 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:23:51.536720 kubelet[2752]: I1106 00:23:51.536605 2752 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:23:51.538590 kubelet[2752]: E1106 00:23:51.538547 2752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-288d7afe99?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="200ms" Nov 6 00:23:51.538868 kubelet[2752]: I1106 00:23:51.538851 2752 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:23:51.538947 kubelet[2752]: I1106 00:23:51.538931 2752 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:23:51.539845 kubelet[2752]: I1106 00:23:51.539828 2752 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:23:51.542871 kubelet[2752]: E1106 00:23:51.542623 2752 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:23:51.566885 kubelet[2752]: I1106 00:23:51.566865 2752 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:23:51.566992 kubelet[2752]: I1106 00:23:51.566984 2752 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:23:51.567480 kubelet[2752]: I1106 00:23:51.567312 2752 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:23:51.591519 kubelet[2752]: I1106 00:23:51.591507 2752 policy_none.go:49] "None policy: Start" Nov 6 00:23:51.591586 kubelet[2752]: I1106 00:23:51.591572 2752 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 6 00:23:51.591612 kubelet[2752]: I1106 00:23:51.591589 2752 state_mem.go:36] "Initializing new 
in-memory state store" logger="Memory Manager state checkpoint" Nov 6 00:23:51.594993 kubelet[2752]: I1106 00:23:51.594980 2752 policy_none.go:47] "Start" Nov 6 00:23:51.598157 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 00:23:51.607987 kubelet[2752]: I1106 00:23:51.607948 2752 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 6 00:23:51.608802 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 00:23:51.610379 kubelet[2752]: I1106 00:23:51.610353 2752 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 6 00:23:51.610379 kubelet[2752]: I1106 00:23:51.610373 2752 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 6 00:23:51.610465 kubelet[2752]: I1106 00:23:51.610391 2752 kubelet.go:2427] "Starting kubelet main sync loop" Nov 6 00:23:51.610465 kubelet[2752]: E1106 00:23:51.610422 2752 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:23:51.613679 kubelet[2752]: E1106 00:23:51.613639 2752 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:23:51.625144 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 00:23:51.626737 kubelet[2752]: E1106 00:23:51.626296 2752 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:23:51.626737 kubelet[2752]: I1106 00:23:51.626441 2752 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:23:51.626737 kubelet[2752]: I1106 00:23:51.626450 2752 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:23:51.627399 kubelet[2752]: I1106 00:23:51.627351 2752 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:23:51.628169 kubelet[2752]: E1106 00:23:51.628155 2752 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 00:23:51.628273 kubelet[2752]: E1106 00:23:51.628265 2752 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.1.0-n-288d7afe99\" not found" Nov 6 00:23:51.721581 systemd[1]: Created slice kubepods-burstable-pod0e691f01dd3596ddcee9303291331c66.slice - libcontainer container kubepods-burstable-pod0e691f01dd3596ddcee9303291331c66.slice. 
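[Annotation] The kubepods-besteffort.slice, kubepods-burstable.slice, and per-pod kubepods-burstable-pod<uid>.slice cgroups created above reflect pod QoS classes: the kubelet places each pod under the slice matching its class, derived from the pod's resource requests and limits. As an illustration only (not taken from this node's manifests), a pod with requests but no matching limits is classified as Burstable and therefore lands under kubepods-burstable.slice:

```yaml
# Illustrative only: requests without equal limits => Burstable QoS, so the
# kubelet creates this pod's cgroup under kubepods-burstable.slice.
# The image tag reuses the pause image already pulled in this log.
apiVersion: v1
kind: Pod
metadata:
  name: example-burstable        # hypothetical name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.10.1
      resources:
        requests:
          cpu: 250m
          memory: 128Mi
```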
Nov 6 00:23:51.728355 kubelet[2752]: I1106 00:23:51.728317 2752 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:51.728599 kubelet[2752]: E1106 00:23:51.728580 2752 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:51.729607 kubelet[2752]: E1106 00:23:51.729576 2752 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-288d7afe99\" not found" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:51.733400 systemd[1]: Created slice kubepods-burstable-pod9c00d0b5203aca67dbf1cca838e9f4ea.slice - libcontainer container kubepods-burstable-pod9c00d0b5203aca67dbf1cca838e9f4ea.slice. Nov 6 00:23:51.736023 kubelet[2752]: E1106 00:23:51.735849 2752 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-288d7afe99\" not found" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:51.737164 kubelet[2752]: I1106 00:23:51.737145 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e691f01dd3596ddcee9303291331c66-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-288d7afe99\" (UID: \"0e691f01dd3596ddcee9303291331c66\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:51.737277 kubelet[2752]: I1106 00:23:51.737266 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e691f01dd3596ddcee9303291331c66-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-288d7afe99\" (UID: \"0e691f01dd3596ddcee9303291331c66\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:51.737352 kubelet[2752]: I1106 00:23:51.737341 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e691f01dd3596ddcee9303291331c66-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-288d7afe99\" (UID: \"0e691f01dd3596ddcee9303291331c66\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:51.737430 kubelet[2752]: I1106 00:23:51.737421 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c00d0b5203aca67dbf1cca838e9f4ea-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-288d7afe99\" (UID: \"9c00d0b5203aca67dbf1cca838e9f4ea\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:51.737494 kubelet[2752]: I1106 00:23:51.737475 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c00d0b5203aca67dbf1cca838e9f4ea-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-288d7afe99\" (UID: \"9c00d0b5203aca67dbf1cca838e9f4ea\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:51.737544 kubelet[2752]: I1106 00:23:51.737535 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f4bc110e70d7c35172efb9abbea7e42-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-288d7afe99\" (UID: 
\"4f4bc110e70d7c35172efb9abbea7e42\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:51.737606 kubelet[2752]: I1106 00:23:51.737597 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c00d0b5203aca67dbf1cca838e9f4ea-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-288d7afe99\" (UID: \"9c00d0b5203aca67dbf1cca838e9f4ea\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:51.737737 kubelet[2752]: I1106 00:23:51.737650 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c00d0b5203aca67dbf1cca838e9f4ea-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-288d7afe99\" (UID: \"9c00d0b5203aca67dbf1cca838e9f4ea\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:51.737737 kubelet[2752]: I1106 00:23:51.737682 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c00d0b5203aca67dbf1cca838e9f4ea-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-288d7afe99\" (UID: \"9c00d0b5203aca67dbf1cca838e9f4ea\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:51.738074 systemd[1]: Created slice kubepods-burstable-pod4f4bc110e70d7c35172efb9abbea7e42.slice - libcontainer container kubepods-burstable-pod4f4bc110e70d7c35172efb9abbea7e42.slice. Nov 6 00:23:51.739131 kubelet[2752]: E1106 00:23:51.738824 2752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-288d7afe99?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="400ms" Nov 6 00:23:51.739352 kubelet[2752]: E1106 00:23:51.739334 2752 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-288d7afe99\" not found" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:51.930705 kubelet[2752]: I1106 00:23:51.930678 2752 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:51.931121 kubelet[2752]: E1106 00:23:51.930947 2752 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:52.034443 containerd[1693]: time="2025-11-06T00:23:52.034402989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-288d7afe99,Uid:0e691f01dd3596ddcee9303291331c66,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:52.041458 containerd[1693]: time="2025-11-06T00:23:52.041427931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-288d7afe99,Uid:9c00d0b5203aca67dbf1cca838e9f4ea,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:52.045398 containerd[1693]: time="2025-11-06T00:23:52.045220919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-288d7afe99,Uid:4f4bc110e70d7c35172efb9abbea7e42,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:52.139448 kubelet[2752]: E1106 00:23:52.139413 2752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-288d7afe99?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="800ms" Nov 6 00:23:52.333107 kubelet[2752]: I1106 00:23:52.333082 2752 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:52.333443 kubelet[2752]: E1106 00:23:52.333422 2752 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:52.390301 kubelet[2752]: E1106 00:23:52.390276 2752 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-288d7afe99&limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:23:52.595227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2223782766.mount: Deactivated successfully. Nov 6 00:23:52.611621 containerd[1693]: time="2025-11-06T00:23:52.611586685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:23:52.619266 containerd[1693]: time="2025-11-06T00:23:52.619159338Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Nov 6 00:23:52.621485 containerd[1693]: time="2025-11-06T00:23:52.621458636Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:23:52.623743 containerd[1693]: time="2025-11-06T00:23:52.623715181Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:23:52.627975 containerd[1693]: time="2025-11-06T00:23:52.627653696Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 00:23:52.630062 containerd[1693]: time="2025-11-06T00:23:52.630020944Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:23:52.632312 containerd[1693]: time="2025-11-06T00:23:52.632284565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:23:52.632780 containerd[1693]: time="2025-11-06T00:23:52.632757745Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 591.806499ms" Nov 6 00:23:52.634280 containerd[1693]: time="2025-11-06T00:23:52.634253557Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" 
Nov 6 00:23:52.637083 containerd[1693]: time="2025-11-06T00:23:52.637059253Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 581.134725ms" Nov 6 00:23:52.664482 containerd[1693]: time="2025-11-06T00:23:52.664379285Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 610.487432ms" Nov 6 00:23:52.665155 containerd[1693]: time="2025-11-06T00:23:52.665122972Z" level=info msg="connecting to shim fa7a183b7f8b85791b492f60dc3151374d4ef0f865f4112fd1ddb7944aec84a5" address="unix:///run/containerd/s/2125e07e90f7f526a3c2858b81649f9fb0166641cfda07e77eda80f61c8bf4da" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:52.674990 kubelet[2752]: E1106 00:23:52.673668 2752 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:23:52.682410 containerd[1693]: time="2025-11-06T00:23:52.682377938Z" level=info msg="connecting to shim 917b7b1e294a84ee8c9659f6da428d6ba18d9b3ebc9b99e5208c3bbd79689b06" address="unix:///run/containerd/s/46973af0511d33e6b3e9da06276a7daf0a6b52104d846dfffe9d07e387b531e1" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:52.701323 systemd[1]: Started cri-containerd-fa7a183b7f8b85791b492f60dc3151374d4ef0f865f4112fd1ddb7944aec84a5.scope - libcontainer container fa7a183b7f8b85791b492f60dc3151374d4ef0f865f4112fd1ddb7944aec84a5. Nov 6 00:23:52.711616 containerd[1693]: time="2025-11-06T00:23:52.711585226Z" level=info msg="connecting to shim 02173ced8961d3b921bcba462bf58e4e073c58abceca05e9e5e2fa1522e15ff3" address="unix:///run/containerd/s/f8926597210d75b7882571eb8f9af6c4576bd98ae76a4a4ed4b072d5efcdbb9e" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:52.718227 systemd[1]: Started cri-containerd-917b7b1e294a84ee8c9659f6da428d6ba18d9b3ebc9b99e5208c3bbd79689b06.scope - libcontainer container 917b7b1e294a84ee8c9659f6da428d6ba18d9b3ebc9b99e5208c3bbd79689b06. Nov 6 00:23:52.735243 kubelet[2752]: E1106 00:23:52.735217 2752 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:23:52.743358 systemd[1]: Started cri-containerd-02173ced8961d3b921bcba462bf58e4e073c58abceca05e9e5e2fa1522e15ff3.scope - libcontainer container 02173ced8961d3b921bcba462bf58e4e073c58abceca05e9e5e2fa1522e15ff3. 
Nov 6 00:23:52.777430 containerd[1693]: time="2025-11-06T00:23:52.777291483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-288d7afe99,Uid:0e691f01dd3596ddcee9303291331c66,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa7a183b7f8b85791b492f60dc3151374d4ef0f865f4112fd1ddb7944aec84a5\"" Nov 6 00:23:52.785741 containerd[1693]: time="2025-11-06T00:23:52.785702651Z" level=info msg="CreateContainer within sandbox \"fa7a183b7f8b85791b492f60dc3151374d4ef0f865f4112fd1ddb7944aec84a5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 00:23:52.810305 containerd[1693]: time="2025-11-06T00:23:52.810286048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-288d7afe99,Uid:9c00d0b5203aca67dbf1cca838e9f4ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"917b7b1e294a84ee8c9659f6da428d6ba18d9b3ebc9b99e5208c3bbd79689b06\"" Nov 6 00:23:52.817417 containerd[1693]: time="2025-11-06T00:23:52.816916699Z" level=info msg="Container 029f57f2a8551710442a70f5c427b41a90635f75c4e33af4c3bc2217b8f8ba9d: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:52.818299 containerd[1693]: time="2025-11-06T00:23:52.818283310Z" level=info msg="CreateContainer within sandbox \"917b7b1e294a84ee8c9659f6da428d6ba18d9b3ebc9b99e5208c3bbd79689b06\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 00:23:52.819806 containerd[1693]: time="2025-11-06T00:23:52.819787965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-288d7afe99,Uid:4f4bc110e70d7c35172efb9abbea7e42,Namespace:kube-system,Attempt:0,} returns sandbox id \"02173ced8961d3b921bcba462bf58e4e073c58abceca05e9e5e2fa1522e15ff3\"" Nov 6 00:23:52.832323 containerd[1693]: time="2025-11-06T00:23:52.832305000Z" level=info msg="CreateContainer within sandbox \"02173ced8961d3b921bcba462bf58e4e073c58abceca05e9e5e2fa1522e15ff3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 00:23:52.838151 containerd[1693]: time="2025-11-06T00:23:52.838128463Z" level=info msg="CreateContainer within sandbox \"fa7a183b7f8b85791b492f60dc3151374d4ef0f865f4112fd1ddb7944aec84a5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"029f57f2a8551710442a70f5c427b41a90635f75c4e33af4c3bc2217b8f8ba9d\"" Nov 6 00:23:52.838555 containerd[1693]: time="2025-11-06T00:23:52.838540440Z" level=info msg="StartContainer for \"029f57f2a8551710442a70f5c427b41a90635f75c4e33af4c3bc2217b8f8ba9d\"" Nov 6 00:23:52.839474 containerd[1693]: time="2025-11-06T00:23:52.839450789Z" level=info msg="connecting to shim 029f57f2a8551710442a70f5c427b41a90635f75c4e33af4c3bc2217b8f8ba9d" address="unix:///run/containerd/s/2125e07e90f7f526a3c2858b81649f9fb0166641cfda07e77eda80f61c8bf4da" protocol=ttrpc version=3 Nov 6 00:23:52.857085 systemd[1]: Started cri-containerd-029f57f2a8551710442a70f5c427b41a90635f75c4e33af4c3bc2217b8f8ba9d.scope - libcontainer container 029f57f2a8551710442a70f5c427b41a90635f75c4e33af4c3bc2217b8f8ba9d. 
Nov 6 00:23:52.866862 containerd[1693]: time="2025-11-06T00:23:52.866836214Z" level=info msg="Container 629d5876d7fee84d108f5e570c1f28338f6926b90103d85eeabab1b0e3594024: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:52.872864 containerd[1693]: time="2025-11-06T00:23:52.872839571Z" level=info msg="Container d844f590a4b8c8371490ddc34e86f6660a7ca6460c03ad6098d9382cfe54bb1d: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:52.876711 kubelet[2752]: E1106 00:23:52.876480 2752 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:23:52.883470 containerd[1693]: time="2025-11-06T00:23:52.883448755Z" level=info msg="CreateContainer within sandbox \"917b7b1e294a84ee8c9659f6da428d6ba18d9b3ebc9b99e5208c3bbd79689b06\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"629d5876d7fee84d108f5e570c1f28338f6926b90103d85eeabab1b0e3594024\"" Nov 6 00:23:52.883881 containerd[1693]: time="2025-11-06T00:23:52.883840209Z" level=info msg="StartContainer for \"629d5876d7fee84d108f5e570c1f28338f6926b90103d85eeabab1b0e3594024\"" Nov 6 00:23:52.885225 containerd[1693]: time="2025-11-06T00:23:52.885181843Z" level=info msg="connecting to shim 629d5876d7fee84d108f5e570c1f28338f6926b90103d85eeabab1b0e3594024" address="unix:///run/containerd/s/46973af0511d33e6b3e9da06276a7daf0a6b52104d846dfffe9d07e387b531e1" protocol=ttrpc version=3 Nov 6 00:23:52.894530 containerd[1693]: time="2025-11-06T00:23:52.894114343Z" level=info msg="CreateContainer within sandbox \"02173ced8961d3b921bcba462bf58e4e073c58abceca05e9e5e2fa1522e15ff3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d844f590a4b8c8371490ddc34e86f6660a7ca6460c03ad6098d9382cfe54bb1d\"" Nov 6 00:23:52.895277 containerd[1693]: time="2025-11-06T00:23:52.895255243Z" level=info msg="StartContainer for \"d844f590a4b8c8371490ddc34e86f6660a7ca6460c03ad6098d9382cfe54bb1d\"" Nov 6 00:23:52.897870 containerd[1693]: time="2025-11-06T00:23:52.897841981Z" level=info msg="connecting to shim d844f590a4b8c8371490ddc34e86f6660a7ca6460c03ad6098d9382cfe54bb1d" address="unix:///run/containerd/s/f8926597210d75b7882571eb8f9af6c4576bd98ae76a4a4ed4b072d5efcdbb9e" protocol=ttrpc version=3 Nov 6 00:23:52.904221 systemd[1]: Started cri-containerd-629d5876d7fee84d108f5e570c1f28338f6926b90103d85eeabab1b0e3594024.scope - libcontainer container 629d5876d7fee84d108f5e570c1f28338f6926b90103d85eeabab1b0e3594024. Nov 6 00:23:52.925585 containerd[1693]: time="2025-11-06T00:23:52.925563139Z" level=info msg="StartContainer for \"029f57f2a8551710442a70f5c427b41a90635f75c4e33af4c3bc2217b8f8ba9d\" returns successfully" Nov 6 00:23:52.928118 systemd[1]: Started cri-containerd-d844f590a4b8c8371490ddc34e86f6660a7ca6460c03ad6098d9382cfe54bb1d.scope - libcontainer container d844f590a4b8c8371490ddc34e86f6660a7ca6460c03ad6098d9382cfe54bb1d. 
Nov 6 00:23:52.940075 kubelet[2752]: E1106 00:23:52.940049 2752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-288d7afe99?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="1.6s" Nov 6 00:23:52.985234 containerd[1693]: time="2025-11-06T00:23:52.985207494Z" level=info msg="StartContainer for \"629d5876d7fee84d108f5e570c1f28338f6926b90103d85eeabab1b0e3594024\" returns successfully" Nov 6 00:23:53.016065 containerd[1693]: time="2025-11-06T00:23:53.016012694Z" level=info msg="StartContainer for \"d844f590a4b8c8371490ddc34e86f6660a7ca6460c03ad6098d9382cfe54bb1d\" returns successfully" Nov 6 00:23:53.136207 kubelet[2752]: I1106 00:23:53.136142 2752 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:53.623547 kubelet[2752]: E1106 00:23:53.623516 2752 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-288d7afe99\" not found" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:53.634647 kubelet[2752]: E1106 00:23:53.634626 2752 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-288d7afe99\" not found" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:53.644330 kubelet[2752]: E1106 00:23:53.644309 2752 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-288d7afe99\" not found" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:54.643151 kubelet[2752]: E1106 00:23:54.643122 2752 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-288d7afe99\" not found" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:54.643468 kubelet[2752]: E1106 00:23:54.643443 2752 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-288d7afe99\" not found" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:54.719337 kubelet[2752]: E1106 00:23:54.719302 2752 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.1.0-n-288d7afe99\" not found" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:54.834723 kubelet[2752]: I1106 00:23:54.834164 2752 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:54.834723 kubelet[2752]: E1106 00:23:54.834192 2752 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459.1.0-n-288d7afe99\": node \"ci-4459.1.0-n-288d7afe99\" not found" Nov 6 00:23:54.934314 kubelet[2752]: I1106 00:23:54.934227 2752 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:54.988809 kubelet[2752]: E1106 00:23:54.988778 2752 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-288d7afe99\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:54.988809 kubelet[2752]: I1106 00:23:54.988807 2752 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:54.990398 kubelet[2752]: E1106 00:23:54.990372 2752 kubelet.go:3221] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4459.1.0-n-288d7afe99\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:54.990398 kubelet[2752]: I1106 00:23:54.990397 2752 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:54.991802 kubelet[2752]: E1106 00:23:54.991763 2752 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-n-288d7afe99\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:55.521873 kubelet[2752]: I1106 00:23:55.521845 2752 apiserver.go:52] "Watching apiserver" Nov 6 00:23:55.536523 kubelet[2752]: I1106 00:23:55.536492 2752 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 6 00:23:55.642427 kubelet[2752]: I1106 00:23:55.642401 2752 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:55.651049 kubelet[2752]: I1106 00:23:55.650991 2752 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:23:55.763927 kubelet[2752]: I1106 00:23:55.763904 2752 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:55.769362 kubelet[2752]: I1106 00:23:55.769320 2752 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:23:56.772457 systemd[1]: Reload requested from client PID 3038 ('systemctl') (unit session-9.scope)... Nov 6 00:23:56.772472 systemd[1]: Reloading... Nov 6 00:23:56.864995 zram_generator::config[3094]: No configuration found. Nov 6 00:23:57.051242 systemd[1]: Reloading finished in 278 ms. Nov 6 00:23:57.073176 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:57.084377 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 00:23:57.084585 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:57.084627 systemd[1]: kubelet.service: Consumed 759ms CPU time, 124.8M memory peak. Nov 6 00:23:57.086514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:58.482981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:58.490274 (kubelet)[3152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:23:58.535984 kubelet[3152]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:23:58.535984 kubelet[3152]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 6 00:23:58.535984 kubelet[3152]: I1106 00:23:58.535717 3152 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:23:58.540818 kubelet[3152]: I1106 00:23:58.540792 3152 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 00:23:58.540818 kubelet[3152]: I1106 00:23:58.540810 3152 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:23:58.540935 kubelet[3152]: I1106 00:23:58.540831 3152 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 00:23:58.540935 kubelet[3152]: I1106 00:23:58.540838 3152 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 00:23:58.541170 kubelet[3152]: I1106 00:23:58.541155 3152 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:23:58.542533 kubelet[3152]: I1106 00:23:58.542514 3152 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 00:23:58.545660 kubelet[3152]: I1106 00:23:58.544974 3152 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:23:58.551193 kubelet[3152]: I1106 00:23:58.551177 3152 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:23:58.553551 kubelet[3152]: I1106 00:23:58.553527 3152 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 6 00:23:58.553697 kubelet[3152]: I1106 00:23:58.553672 3152 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:23:58.553816 kubelet[3152]: I1106 00:23:58.553696 3152 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-288d7afe99","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:23:58.553912 kubelet[3152]: I1106 00:23:58.553822 3152 topology_manager.go:138] "Creating topology manager with none 
policy" Nov 6 00:23:58.553912 kubelet[3152]: I1106 00:23:58.553832 3152 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 00:23:58.553912 kubelet[3152]: I1106 00:23:58.553853 3152 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 6 00:23:58.554550 kubelet[3152]: I1106 00:23:58.554538 3152 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:23:58.554658 kubelet[3152]: I1106 00:23:58.554652 3152 kubelet.go:475] "Attempting to sync node with API server" Nov 6 00:23:58.554688 kubelet[3152]: I1106 00:23:58.554664 3152 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:23:58.554688 kubelet[3152]: I1106 00:23:58.554682 3152 kubelet.go:387] "Adding apiserver pod source" Nov 6 00:23:58.554750 kubelet[3152]: I1106 00:23:58.554699 3152 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:23:58.558287 kubelet[3152]: I1106 00:23:58.556393 3152 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:23:58.558287 kubelet[3152]: I1106 00:23:58.557055 3152 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:23:58.558287 kubelet[3152]: I1106 00:23:58.557085 3152 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 00:23:58.559979 kubelet[3152]: I1106 00:23:58.559320 3152 server.go:1262] "Started kubelet" Nov 6 00:23:58.561972 kubelet[3152]: I1106 00:23:58.560694 3152 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:23:58.568313 kubelet[3152]: I1106 00:23:58.568285 3152 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:23:58.569615 kubelet[3152]: I1106 00:23:58.569153 3152 server.go:310] "Adding debug handlers to kubelet server" Nov 6 00:23:58.575373 kubelet[3152]: I1106 00:23:58.575343 3152 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:23:58.575440 kubelet[3152]: I1106 00:23:58.575389 3152 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 00:23:58.575524 kubelet[3152]: I1106 00:23:58.575510 3152 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:23:58.575720 kubelet[3152]: I1106 00:23:58.575705 3152 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:23:58.577826 kubelet[3152]: E1106 00:23:58.576765 3152 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-288d7afe99\" not found" Nov 6 00:23:58.577826 kubelet[3152]: I1106 00:23:58.576809 3152 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 6 00:23:58.577826 kubelet[3152]: I1106 00:23:58.576948 3152 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 6 00:23:58.577826 kubelet[3152]: I1106 00:23:58.577048 3152 reconciler.go:29] "Reconciler: start to sync state" Nov 6 00:23:58.581514 kubelet[3152]: E1106 00:23:58.581488 3152 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:23:58.583119 kubelet[3152]: I1106 00:23:58.583096 3152 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:23:58.583119 kubelet[3152]: I1106 00:23:58.583118 3152 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:23:58.583213 kubelet[3152]: I1106 00:23:58.583190 3152 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:23:58.598856 kubelet[3152]: I1106 00:23:58.598801 3152 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 6 00:23:58.599731 kubelet[3152]: I1106 00:23:58.599709 3152 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 6 00:23:58.599731 kubelet[3152]: I1106 00:23:58.599726 3152 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 6 00:23:58.599822 kubelet[3152]: I1106 00:23:58.599743 3152 kubelet.go:2427] "Starting kubelet main sync loop" Nov 6 00:23:58.599822 kubelet[3152]: E1106 00:23:58.599777 3152 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:23:58.629294 kubelet[3152]: I1106 00:23:58.629085 3152 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:23:58.629294 kubelet[3152]: I1106 00:23:58.629097 3152 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:23:58.629294 kubelet[3152]: I1106 00:23:58.629115 3152 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:23:58.629781 kubelet[3152]: I1106 00:23:58.629279 3152 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 00:23:58.629841 kubelet[3152]: I1106 00:23:58.629783 3152 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 00:23:58.629841 kubelet[3152]: I1106 00:23:58.629805 3152 policy_none.go:49] "None policy: Start" Nov 6 00:23:58.629841 kubelet[3152]: I1106 00:23:58.629817 3152 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 6 00:23:58.629841 kubelet[3152]: I1106 00:23:58.629831 3152 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 6 00:23:58.630044 kubelet[3152]: I1106 00:23:58.630030 3152 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 6 00:23:58.630044 kubelet[3152]: I1106 00:23:58.630043 3152 policy_none.go:47] "Start" Nov 6 00:23:58.633017 kubelet[3152]: E1106 00:23:58.633003 3152 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:23:58.633246 kubelet[3152]: I1106 00:23:58.633236 3152 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:23:58.633330 kubelet[3152]: I1106 00:23:58.633309 3152 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:23:58.633653 kubelet[3152]: I1106 00:23:58.633643 3152 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:23:58.635364 kubelet[3152]: E1106 00:23:58.635344 3152 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 00:23:58.676901 sudo[3189]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 6 00:23:58.677160 sudo[3189]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 6 00:23:58.700865 kubelet[3152]: I1106 00:23:58.700831 3152 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.703165 kubelet[3152]: I1106 00:23:58.703145 3152 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.704866 kubelet[3152]: I1106 00:23:58.703726 3152 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.715283 kubelet[3152]: I1106 00:23:58.715266 3152 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:23:58.715662 kubelet[3152]: E1106 00:23:58.715466 3152 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-n-288d7afe99\" already exists" pod="kube-system/kube-scheduler-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.715662 kubelet[3152]: I1106 00:23:58.715415 3152 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:23:58.715662 kubelet[3152]: I1106 00:23:58.715375 3152 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:23:58.715662 kubelet[3152]: E1106 00:23:58.715539 3152 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-288d7afe99\" already exists" pod="kube-system/kube-apiserver-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.736632 kubelet[3152]: I1106 00:23:58.736574 3152 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.758576 kubelet[3152]: I1106 00:23:58.758501 3152 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.758576 kubelet[3152]: I1106 00:23:58.758552 3152 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.777807 kubelet[3152]: I1106 00:23:58.777321 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c00d0b5203aca67dbf1cca838e9f4ea-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-288d7afe99\" (UID: \"9c00d0b5203aca67dbf1cca838e9f4ea\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.777807 kubelet[3152]: I1106 00:23:58.777347 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c00d0b5203aca67dbf1cca838e9f4ea-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-288d7afe99\" (UID: \"9c00d0b5203aca67dbf1cca838e9f4ea\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.777807 kubelet[3152]: I1106 00:23:58.777366 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/9c00d0b5203aca67dbf1cca838e9f4ea-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-288d7afe99\" (UID: \"9c00d0b5203aca67dbf1cca838e9f4ea\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.777807 kubelet[3152]: I1106 00:23:58.777384 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f4bc110e70d7c35172efb9abbea7e42-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-288d7afe99\" (UID: \"4f4bc110e70d7c35172efb9abbea7e42\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.777807 kubelet[3152]: I1106 00:23:58.777401 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e691f01dd3596ddcee9303291331c66-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-288d7afe99\" (UID: \"0e691f01dd3596ddcee9303291331c66\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.777990 kubelet[3152]: I1106 00:23:58.777416 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e691f01dd3596ddcee9303291331c66-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-288d7afe99\" (UID: \"0e691f01dd3596ddcee9303291331c66\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.777990 kubelet[3152]: I1106 00:23:58.777432 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e691f01dd3596ddcee9303291331c66-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-288d7afe99\" (UID: \"0e691f01dd3596ddcee9303291331c66\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.777990 kubelet[3152]: I1106 00:23:58.777448 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c00d0b5203aca67dbf1cca838e9f4ea-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-288d7afe99\" (UID: \"9c00d0b5203aca67dbf1cca838e9f4ea\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:58.777990 kubelet[3152]: I1106 00:23:58.777464 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c00d0b5203aca67dbf1cca838e9f4ea-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-288d7afe99\" (UID: \"9c00d0b5203aca67dbf1cca838e9f4ea\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:59.044439 sudo[3189]: pam_unix(sudo:session): session closed for user root Nov 6 00:23:59.556212 kubelet[3152]: I1106 00:23:59.556179 3152 apiserver.go:52] "Watching apiserver" Nov 6 00:23:59.577872 kubelet[3152]: I1106 00:23:59.577821 3152 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 6 00:23:59.618739 kubelet[3152]: I1106 00:23:59.618694 3152 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:59.626424 kubelet[3152]: I1106 00:23:59.626384 3152 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 
00:23:59.626532 kubelet[3152]: E1106 00:23:59.626437 3152 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-288d7afe99\" already exists" pod="kube-system/kube-apiserver-ci-4459.1.0-n-288d7afe99" Nov 6 00:23:59.651902 kubelet[3152]: I1106 00:23:59.651691 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-288d7afe99" podStartSLOduration=1.6516767159999999 podStartE2EDuration="1.651676716s" podCreationTimestamp="2025-11-06 00:23:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:23:59.640319686 +0000 UTC m=+1.146887506" watchObservedRunningTime="2025-11-06 00:23:59.651676716 +0000 UTC m=+1.158244508" Nov 6 00:23:59.651902 kubelet[3152]: I1106 00:23:59.651814 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.1.0-n-288d7afe99" podStartSLOduration=4.651798603 podStartE2EDuration="4.651798603s" podCreationTimestamp="2025-11-06 00:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:23:59.651590823 +0000 UTC m=+1.158158624" watchObservedRunningTime="2025-11-06 00:23:59.651798603 +0000 UTC m=+1.158366404" Nov 6 00:23:59.689125 kubelet[3152]: I1106 00:23:59.689072 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.1.0-n-288d7afe99" podStartSLOduration=4.68905849 podStartE2EDuration="4.68905849s" podCreationTimestamp="2025-11-06 00:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:23:59.689047653 +0000 UTC m=+1.195615455" watchObservedRunningTime="2025-11-06 00:23:59.68905849 +0000 UTC m=+1.195626283" Nov 6 00:24:00.387782 sudo[2162]: pam_unix(sudo:session): session closed for user root Nov 6 00:24:00.488301 sshd[2161]: Connection closed by 10.200.16.10 port 34784 Nov 6 00:24:00.489154 sshd-session[2158]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:00.492209 systemd[1]: sshd@6-10.200.8.40:22-10.200.16.10:34784.service: Deactivated successfully. Nov 6 00:24:00.494300 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 00:24:00.494543 systemd[1]: session-9.scope: Consumed 3.994s CPU time, 273.1M memory peak. Nov 6 00:24:00.496693 systemd-logind[1676]: Session 9 logged out. Waiting for processes to exit. Nov 6 00:24:00.498205 systemd-logind[1676]: Removed session 9. Nov 6 00:24:04.104178 kubelet[3152]: I1106 00:24:04.104140 3152 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 00:24:04.104981 containerd[1693]: time="2025-11-06T00:24:04.104749457Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 00:24:04.105384 kubelet[3152]: I1106 00:24:04.105257 3152 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 00:24:04.908889 systemd[1]: Created slice kubepods-besteffort-pod4d0ad589_9b82_4077_8e60_45d6526bcf5e.slice - libcontainer container kubepods-besteffort-pod4d0ad589_9b82_4077_8e60_45d6526bcf5e.slice. 
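The pod_startup_latency_tracker entries above report podStartE2EDuration as the gap between podCreationTimestamp and the observed running time (the pull timestamps are zero here because static pods pulled nothing). For kube-controller-manager that is 00:23:59.651676716 minus 00:23:58, i.e. 1.651676716s. A small sketch of that arithmetic with the timestamps copied from the log; the layout string is the only assumption.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry for
	// kube-controller-manager-ci-4459.1.0-n-288d7afe99.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-11-06 00:23:58 +0000 UTC")
	running, _ := time.Parse(layout, "2025-11-06 00:23:59.651676716 +0000 UTC")

	// podStartE2EDuration is simply observed-running minus creation.
	fmt.Println(running.Sub(created)) // 1.651676716s, matching the log
}
```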
Nov 6 00:24:04.914970 kubelet[3152]: I1106 00:24:04.914893 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d0ad589-9b82-4077-8e60-45d6526bcf5e-lib-modules\") pod \"kube-proxy-47ffq\" (UID: \"4d0ad589-9b82-4077-8e60-45d6526bcf5e\") " pod="kube-system/kube-proxy-47ffq" Nov 6 00:24:04.914970 kubelet[3152]: I1106 00:24:04.914926 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-hostproc\") pod \"cilium-vkd4l\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " pod="kube-system/cilium-vkd4l" Nov 6 00:24:04.915219 kubelet[3152]: I1106 00:24:04.914949 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-hubble-tls\") pod \"cilium-vkd4l\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " pod="kube-system/cilium-vkd4l" Nov 6 00:24:04.915219 kubelet[3152]: I1106 00:24:04.915159 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-bpf-maps\") pod \"cilium-vkd4l\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " pod="kube-system/cilium-vkd4l" Nov 6 00:24:04.915323 kubelet[3152]: I1106 00:24:04.915175 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cni-path\") pod \"cilium-vkd4l\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " pod="kube-system/cilium-vkd4l" Nov 6 00:24:04.915402 kubelet[3152]: I1106 00:24:04.915357 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-xtables-lock\") pod \"cilium-vkd4l\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " pod="kube-system/cilium-vkd4l" Nov 6 00:24:04.915488 kubelet[3152]: I1106 00:24:04.915468 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cilium-config-path\") pod \"cilium-vkd4l\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " pod="kube-system/cilium-vkd4l" Nov 6 00:24:04.915593 kubelet[3152]: I1106 00:24:04.915537 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-host-proc-sys-kernel\") pod \"cilium-vkd4l\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " pod="kube-system/cilium-vkd4l" Nov 6 00:24:04.915593 kubelet[3152]: I1106 00:24:04.915556 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mchq2\" (UniqueName: \"kubernetes.io/projected/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-kube-api-access-mchq2\") pod \"cilium-vkd4l\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " pod="kube-system/cilium-vkd4l" Nov 6 00:24:04.915593 kubelet[3152]: I1106 00:24:04.915575 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/4d0ad589-9b82-4077-8e60-45d6526bcf5e-kube-proxy\") pod \"kube-proxy-47ffq\" (UID: \"4d0ad589-9b82-4077-8e60-45d6526bcf5e\") " pod="kube-system/kube-proxy-47ffq" Nov 6 00:24:04.915762 kubelet[3152]: I1106 00:24:04.915713 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d0ad589-9b82-4077-8e60-45d6526bcf5e-xtables-lock\") pod \"kube-proxy-47ffq\" (UID: \"4d0ad589-9b82-4077-8e60-45d6526bcf5e\") " pod="kube-system/kube-proxy-47ffq" Nov 6 00:24:04.915762 kubelet[3152]: I1106 00:24:04.915732 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgz5b\" (UniqueName: \"kubernetes.io/projected/4d0ad589-9b82-4077-8e60-45d6526bcf5e-kube-api-access-wgz5b\") pod \"kube-proxy-47ffq\" (UID: \"4d0ad589-9b82-4077-8e60-45d6526bcf5e\") " pod="kube-system/kube-proxy-47ffq" Nov 6 00:24:04.915843 kubelet[3152]: I1106 00:24:04.915835 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-etc-cni-netd\") pod \"cilium-vkd4l\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " pod="kube-system/cilium-vkd4l" Nov 6 00:24:04.915919 kubelet[3152]: I1106 00:24:04.915910 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-lib-modules\") pod \"cilium-vkd4l\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " pod="kube-system/cilium-vkd4l" Nov 6 00:24:04.917257 kubelet[3152]: I1106 00:24:04.917065 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cilium-run\") pod \"cilium-vkd4l\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " pod="kube-system/cilium-vkd4l" Nov 6 00:24:04.917257 kubelet[3152]: I1106 00:24:04.917193 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cilium-cgroup\") pod \"cilium-vkd4l\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " pod="kube-system/cilium-vkd4l" Nov 6 00:24:04.917257 kubelet[3152]: I1106 00:24:04.917213 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-clustermesh-secrets\") pod \"cilium-vkd4l\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " pod="kube-system/cilium-vkd4l" Nov 6 00:24:04.917257 kubelet[3152]: I1106 00:24:04.917232 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-host-proc-sys-net\") pod \"cilium-vkd4l\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " pod="kube-system/cilium-vkd4l" Nov 6 00:24:04.921609 systemd[1]: Created slice kubepods-burstable-podae91e7a6_99cf_410e_9207_a19ac2f02a4d.slice - libcontainer container kubepods-burstable-podae91e7a6_99cf_410e_9207_a19ac2f02a4d.slice. 
Nov 6 00:24:05.230094 containerd[1693]: time="2025-11-06T00:24:05.228327912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-47ffq,Uid:4d0ad589-9b82-4077-8e60-45d6526bcf5e,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:05.233236 containerd[1693]: time="2025-11-06T00:24:05.233202857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vkd4l,Uid:ae91e7a6-99cf-410e-9207-a19ac2f02a4d,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:05.247635 systemd[1]: Created slice kubepods-besteffort-pod87b575a2_4213_4c7a_aee0_4b5b182e931f.slice - libcontainer container kubepods-besteffort-pod87b575a2_4213_4c7a_aee0_4b5b182e931f.slice. Nov 6 00:24:05.302811 containerd[1693]: time="2025-11-06T00:24:05.302746059Z" level=info msg="connecting to shim a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3" address="unix:///run/containerd/s/4e1491ca86a0f4aa752808a7ca890fa491f4ad91dcd42a6a6ecc97de7dd611cf" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:05.305286 containerd[1693]: time="2025-11-06T00:24:05.305093776Z" level=info msg="connecting to shim 943f378613cfcb9fce71f336ea29acaeedeeed45cc5bdc3f182c4ef261105764" address="unix:///run/containerd/s/1b3b2e0704b5013ed049a7a9ec5b7fdb4a4cf1c36f3939653e2c7e4bab9ddfda" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:05.321767 kubelet[3152]: I1106 00:24:05.321740 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87b575a2-4213-4c7a-aee0-4b5b182e931f-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-drgrt\" (UID: \"87b575a2-4213-4c7a-aee0-4b5b182e931f\") " pod="kube-system/cilium-operator-6f9c7c5859-drgrt" Nov 6 00:24:05.322142 kubelet[3152]: I1106 00:24:05.321779 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t89rq\" (UniqueName: \"kubernetes.io/projected/87b575a2-4213-4c7a-aee0-4b5b182e931f-kube-api-access-t89rq\") pod \"cilium-operator-6f9c7c5859-drgrt\" (UID: \"87b575a2-4213-4c7a-aee0-4b5b182e931f\") " pod="kube-system/cilium-operator-6f9c7c5859-drgrt" Nov 6 00:24:05.338094 systemd[1]: Started cri-containerd-943f378613cfcb9fce71f336ea29acaeedeeed45cc5bdc3f182c4ef261105764.scope - libcontainer container 943f378613cfcb9fce71f336ea29acaeedeeed45cc5bdc3f182c4ef261105764. Nov 6 00:24:05.339612 systemd[1]: Started cri-containerd-a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3.scope - libcontainer container a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3. 
Nov 6 00:24:05.370172 containerd[1693]: time="2025-11-06T00:24:05.370147235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-47ffq,Uid:4d0ad589-9b82-4077-8e60-45d6526bcf5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"943f378613cfcb9fce71f336ea29acaeedeeed45cc5bdc3f182c4ef261105764\"" Nov 6 00:24:05.374330 containerd[1693]: time="2025-11-06T00:24:05.374310093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vkd4l,Uid:ae91e7a6-99cf-410e-9207-a19ac2f02a4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\"" Nov 6 00:24:05.376079 containerd[1693]: time="2025-11-06T00:24:05.376047180Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 6 00:24:05.378787 containerd[1693]: time="2025-11-06T00:24:05.378480652Z" level=info msg="CreateContainer within sandbox \"943f378613cfcb9fce71f336ea29acaeedeeed45cc5bdc3f182c4ef261105764\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 00:24:05.398455 containerd[1693]: time="2025-11-06T00:24:05.398429729Z" level=info msg="Container d7c6f285f7d077d427558a0abbdf0727e5f804d7d389750b61368702545747e9: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:05.411823 containerd[1693]: time="2025-11-06T00:24:05.411626984Z" level=info msg="CreateContainer within sandbox \"943f378613cfcb9fce71f336ea29acaeedeeed45cc5bdc3f182c4ef261105764\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d7c6f285f7d077d427558a0abbdf0727e5f804d7d389750b61368702545747e9\"" Nov 6 00:24:05.412092 containerd[1693]: time="2025-11-06T00:24:05.412068962Z" level=info msg="StartContainer for \"d7c6f285f7d077d427558a0abbdf0727e5f804d7d389750b61368702545747e9\"" Nov 6 00:24:05.413804 containerd[1693]: time="2025-11-06T00:24:05.413659074Z" level=info msg="connecting to shim d7c6f285f7d077d427558a0abbdf0727e5f804d7d389750b61368702545747e9" address="unix:///run/containerd/s/1b3b2e0704b5013ed049a7a9ec5b7fdb4a4cf1c36f3939653e2c7e4bab9ddfda" protocol=ttrpc version=3 Nov 6 00:24:05.447086 systemd[1]: Started cri-containerd-d7c6f285f7d077d427558a0abbdf0727e5f804d7d389750b61368702545747e9.scope - libcontainer container d7c6f285f7d077d427558a0abbdf0727e5f804d7d389750b61368702545747e9. Nov 6 00:24:05.477676 containerd[1693]: time="2025-11-06T00:24:05.477651281Z" level=info msg="StartContainer for \"d7c6f285f7d077d427558a0abbdf0727e5f804d7d389750b61368702545747e9\" returns successfully" Nov 6 00:24:05.560636 containerd[1693]: time="2025-11-06T00:24:05.560566601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-drgrt,Uid:87b575a2-4213-4c7a-aee0-4b5b182e931f,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:05.594089 containerd[1693]: time="2025-11-06T00:24:05.594060115Z" level=info msg="connecting to shim 62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed" address="unix:///run/containerd/s/4b5a3d6b3783d64f97778cd69521b3e0698045582b7ce64e424fd9f66d69441c" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:05.616158 systemd[1]: Started cri-containerd-62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed.scope - libcontainer container 62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed. 
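The PullImage request above names the Cilium image as repository:tag@digest, and the later pull-completion lines report the repo tag and repo digest separately. A short sketch of splitting such a reference into its three parts; this is a simplified illustration for the reference seen in the log, not containerd's actual reference parser.

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef splits "repo:tag@digest" into its parts. Simplified: the digest
// follows '@' and the tag follows the last ':' that is not part of a path,
// which is enough for the reference seen in the log.
func splitRef(ref string) (repo, tag, digest string) {
	if at := strings.Index(ref, "@"); at >= 0 {
		digest = ref[at+1:]
		ref = ref[:at]
	}
	if colon := strings.LastIndex(ref, ":"); colon >= 0 && !strings.Contains(ref[colon:], "/") {
		tag = ref[colon+1:]
		ref = ref[:colon]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef("quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
	fmt.Println(repo)   // quay.io/cilium/cilium
	fmt.Println(tag)    // v1.12.5
	fmt.Println(digest) // sha256:06ce2b0a...
}
```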
Nov 6 00:24:05.673351 containerd[1693]: time="2025-11-06T00:24:05.673302495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-drgrt,Uid:87b575a2-4213-4c7a-aee0-4b5b182e931f,Namespace:kube-system,Attempt:0,} returns sandbox id \"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\"" Nov 6 00:24:08.618938 kubelet[3152]: I1106 00:24:08.618867 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-47ffq" podStartSLOduration=4.61884913 podStartE2EDuration="4.61884913s" podCreationTimestamp="2025-11-06 00:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:24:05.645160666 +0000 UTC m=+7.151728466" watchObservedRunningTime="2025-11-06 00:24:08.61884913 +0000 UTC m=+10.125416934" Nov 6 00:24:09.913643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2018332484.mount: Deactivated successfully. Nov 6 00:24:11.459902 containerd[1693]: time="2025-11-06T00:24:11.459856702Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:11.461964 containerd[1693]: time="2025-11-06T00:24:11.461904549Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 6 00:24:11.464118 containerd[1693]: time="2025-11-06T00:24:11.464095215Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:11.465281 containerd[1693]: time="2025-11-06T00:24:11.465191611Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.089072968s" Nov 6 00:24:11.465281 containerd[1693]: time="2025-11-06T00:24:11.465222357Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 6 00:24:11.466291 containerd[1693]: time="2025-11-06T00:24:11.466213346Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 6 00:24:11.471604 containerd[1693]: time="2025-11-06T00:24:11.471576807Z" level=info msg="CreateContainer within sandbox \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 00:24:11.487484 containerd[1693]: time="2025-11-06T00:24:11.487459046Z" level=info msg="Container 7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:11.505037 containerd[1693]: time="2025-11-06T00:24:11.505014119Z" level=info msg="CreateContainer within sandbox \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c\"" Nov 6 00:24:11.505579 containerd[1693]: time="2025-11-06T00:24:11.505535532Z" level=info msg="StartContainer for \"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c\"" Nov 6 00:24:11.506597 containerd[1693]: time="2025-11-06T00:24:11.506562598Z" level=info msg="connecting to shim 7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c" address="unix:///run/containerd/s/4e1491ca86a0f4aa752808a7ca890fa491f4ad91dcd42a6a6ecc97de7dd611cf" protocol=ttrpc version=3 Nov 6 00:24:11.528115 systemd[1]: Started cri-containerd-7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c.scope - libcontainer container 7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c. Nov 6 00:24:11.556552 containerd[1693]: time="2025-11-06T00:24:11.556515257Z" level=info msg="StartContainer for \"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c\" returns successfully" Nov 6 00:24:11.564298 systemd[1]: cri-containerd-7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c.scope: Deactivated successfully. Nov 6 00:24:11.566614 containerd[1693]: time="2025-11-06T00:24:11.566580803Z" level=info msg="received exit event container_id:\"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c\" id:\"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c\" pid:3570 exited_at:{seconds:1762388651 nanos:565772388}" Nov 6 00:24:11.567099 containerd[1693]: time="2025-11-06T00:24:11.566770651Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c\" id:\"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c\" pid:3570 exited_at:{seconds:1762388651 nanos:565772388}" Nov 6 00:24:11.581317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c-rootfs.mount: Deactivated successfully. Nov 6 00:24:15.272878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount745100992.mount: Deactivated successfully. 
Nov 6 00:24:15.694673 containerd[1693]: time="2025-11-06T00:24:15.694039555Z" level=info msg="CreateContainer within sandbox \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 00:24:15.727605 containerd[1693]: time="2025-11-06T00:24:15.727563687Z" level=info msg="Container d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:15.733039 containerd[1693]: time="2025-11-06T00:24:15.732779899Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:15.740010 containerd[1693]: time="2025-11-06T00:24:15.739980904Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 6 00:24:15.745335 containerd[1693]: time="2025-11-06T00:24:15.745298509Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:15.746049 containerd[1693]: time="2025-11-06T00:24:15.745981867Z" level=info msg="CreateContainer within sandbox \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5\"" Nov 6 00:24:15.746191 containerd[1693]: time="2025-11-06T00:24:15.746167854Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.279925984s" Nov 6 00:24:15.746229 containerd[1693]: time="2025-11-06T00:24:15.746198535Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 6 00:24:15.746979 containerd[1693]: time="2025-11-06T00:24:15.746943212Z" level=info msg="StartContainer for \"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5\"" Nov 6 00:24:15.747828 containerd[1693]: time="2025-11-06T00:24:15.747778840Z" level=info msg="connecting to shim d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5" address="unix:///run/containerd/s/4e1491ca86a0f4aa752808a7ca890fa491f4ad91dcd42a6a6ecc97de7dd611cf" protocol=ttrpc version=3 Nov 6 00:24:15.754306 containerd[1693]: time="2025-11-06T00:24:15.754281533Z" level=info msg="CreateContainer within sandbox \"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 6 00:24:15.769914 containerd[1693]: time="2025-11-06T00:24:15.769516659Z" level=info msg="Container 61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:15.770133 systemd[1]: Started cri-containerd-d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5.scope - libcontainer container 
d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5. Nov 6 00:24:15.799023 containerd[1693]: time="2025-11-06T00:24:15.798992559Z" level=info msg="CreateContainer within sandbox \"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\"" Nov 6 00:24:15.800146 containerd[1693]: time="2025-11-06T00:24:15.800122830Z" level=info msg="StartContainer for \"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\"" Nov 6 00:24:15.800266 containerd[1693]: time="2025-11-06T00:24:15.800136614Z" level=info msg="StartContainer for \"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5\" returns successfully" Nov 6 00:24:15.800915 containerd[1693]: time="2025-11-06T00:24:15.800827070Z" level=info msg="connecting to shim 61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d" address="unix:///run/containerd/s/4b5a3d6b3783d64f97778cd69521b3e0698045582b7ce64e424fd9f66d69441c" protocol=ttrpc version=3 Nov 6 00:24:15.813036 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 00:24:15.813464 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:24:15.813887 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:24:15.816227 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:24:15.818358 systemd[1]: cri-containerd-d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5.scope: Deactivated successfully. Nov 6 00:24:15.819658 containerd[1693]: time="2025-11-06T00:24:15.819635076Z" level=info msg="received exit event container_id:\"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5\" id:\"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5\" pid:3634 exited_at:{seconds:1762388655 nanos:819382049}" Nov 6 00:24:15.820438 containerd[1693]: time="2025-11-06T00:24:15.820408441Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5\" id:\"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5\" pid:3634 exited_at:{seconds:1762388655 nanos:819382049}" Nov 6 00:24:15.829768 systemd[1]: Started cri-containerd-61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d.scope - libcontainer container 61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d. Nov 6 00:24:15.843656 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:24:15.993971 containerd[1693]: time="2025-11-06T00:24:15.993105360Z" level=info msg="StartContainer for \"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\" returns successfully" Nov 6 00:24:16.271489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5-rootfs.mount: Deactivated successfully. 
Nov 6 00:24:16.668096 containerd[1693]: time="2025-11-06T00:24:16.668059235Z" level=info msg="CreateContainer within sandbox \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 00:24:16.691734 containerd[1693]: time="2025-11-06T00:24:16.691696712Z" level=info msg="Container 2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:16.716547 containerd[1693]: time="2025-11-06T00:24:16.716502273Z" level=info msg="CreateContainer within sandbox \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e\"" Nov 6 00:24:16.719988 containerd[1693]: time="2025-11-06T00:24:16.719295139Z" level=info msg="StartContainer for \"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e\"" Nov 6 00:24:16.720988 containerd[1693]: time="2025-11-06T00:24:16.720948568Z" level=info msg="connecting to shim 2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e" address="unix:///run/containerd/s/4e1491ca86a0f4aa752808a7ca890fa491f4ad91dcd42a6a6ecc97de7dd611cf" protocol=ttrpc version=3 Nov 6 00:24:16.746203 systemd[1]: Started cri-containerd-2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e.scope - libcontainer container 2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e. Nov 6 00:24:16.795080 containerd[1693]: time="2025-11-06T00:24:16.795043054Z" level=info msg="StartContainer for \"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e\" returns successfully" Nov 6 00:24:16.801341 systemd[1]: cri-containerd-2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e.scope: Deactivated successfully. Nov 6 00:24:16.804538 containerd[1693]: time="2025-11-06T00:24:16.804480149Z" level=info msg="received exit event container_id:\"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e\" id:\"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e\" pid:3713 exited_at:{seconds:1762388656 nanos:804282229}" Nov 6 00:24:16.804817 containerd[1693]: time="2025-11-06T00:24:16.804796066Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e\" id:\"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e\" pid:3713 exited_at:{seconds:1762388656 nanos:804282229}" Nov 6 00:24:16.826189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e-rootfs.mount: Deactivated successfully. 
Nov 6 00:24:17.676421 containerd[1693]: time="2025-11-06T00:24:17.676380707Z" level=info msg="CreateContainer within sandbox \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 00:24:17.691510 kubelet[3152]: I1106 00:24:17.691130 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-drgrt" podStartSLOduration=2.617912257 podStartE2EDuration="12.69111319s" podCreationTimestamp="2025-11-06 00:24:05 +0000 UTC" firstStartedPulling="2025-11-06 00:24:05.675009572 +0000 UTC m=+7.181577370" lastFinishedPulling="2025-11-06 00:24:15.748210518 +0000 UTC m=+17.254778303" observedRunningTime="2025-11-06 00:24:16.71355608 +0000 UTC m=+18.220123881" watchObservedRunningTime="2025-11-06 00:24:17.69111319 +0000 UTC m=+19.197681032" Nov 6 00:24:17.697679 containerd[1693]: time="2025-11-06T00:24:17.697463037Z" level=info msg="Container b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:17.719241 containerd[1693]: time="2025-11-06T00:24:17.719211708Z" level=info msg="CreateContainer within sandbox \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46\"" Nov 6 00:24:17.720765 containerd[1693]: time="2025-11-06T00:24:17.720735651Z" level=info msg="StartContainer for \"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46\"" Nov 6 00:24:17.721564 containerd[1693]: time="2025-11-06T00:24:17.721539900Z" level=info msg="connecting to shim b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46" address="unix:///run/containerd/s/4e1491ca86a0f4aa752808a7ca890fa491f4ad91dcd42a6a6ecc97de7dd611cf" protocol=ttrpc version=3 Nov 6 00:24:17.748106 systemd[1]: Started cri-containerd-b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46.scope - libcontainer container b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46. Nov 6 00:24:17.767614 systemd[1]: cri-containerd-b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46.scope: Deactivated successfully. Nov 6 00:24:17.769194 containerd[1693]: time="2025-11-06T00:24:17.769097966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46\" id:\"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46\" pid:3754 exited_at:{seconds:1762388657 nanos:768784795}" Nov 6 00:24:17.772448 containerd[1693]: time="2025-11-06T00:24:17.772339640Z" level=info msg="received exit event container_id:\"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46\" id:\"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46\" pid:3754 exited_at:{seconds:1762388657 nanos:768784795}" Nov 6 00:24:17.778887 containerd[1693]: time="2025-11-06T00:24:17.778867483Z" level=info msg="StartContainer for \"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46\" returns successfully" Nov 6 00:24:17.788324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46-rootfs.mount: Deactivated successfully. 
Nov 6 00:24:18.683493 containerd[1693]: time="2025-11-06T00:24:18.683025347Z" level=info msg="CreateContainer within sandbox \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 00:24:18.705867 containerd[1693]: time="2025-11-06T00:24:18.705833039Z" level=info msg="Container b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:18.720514 containerd[1693]: time="2025-11-06T00:24:18.720485154Z" level=info msg="CreateContainer within sandbox \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\"" Nov 6 00:24:18.721554 containerd[1693]: time="2025-11-06T00:24:18.721268070Z" level=info msg="StartContainer for \"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\"" Nov 6 00:24:18.722338 containerd[1693]: time="2025-11-06T00:24:18.722310339Z" level=info msg="connecting to shim b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3" address="unix:///run/containerd/s/4e1491ca86a0f4aa752808a7ca890fa491f4ad91dcd42a6a6ecc97de7dd611cf" protocol=ttrpc version=3 Nov 6 00:24:18.744085 systemd[1]: Started cri-containerd-b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3.scope - libcontainer container b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3. Nov 6 00:24:18.772744 containerd[1693]: time="2025-11-06T00:24:18.772720207Z" level=info msg="StartContainer for \"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\" returns successfully" Nov 6 00:24:18.838235 containerd[1693]: time="2025-11-06T00:24:18.838204202Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\" id:\"6f6f0b75bfa13bc6ec85fcccd643b9b7d0e226567065e6940935db87c2efc53c\" pid:3822 exited_at:{seconds:1762388658 nanos:837775139}" Nov 6 00:24:18.898215 kubelet[3152]: I1106 00:24:18.898194 3152 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 6 00:24:18.952560 systemd[1]: Created slice kubepods-burstable-podaa4f9258_1412_4b96_8f03_51fb29c7553c.slice - libcontainer container kubepods-burstable-podaa4f9258_1412_4b96_8f03_51fb29c7553c.slice. Nov 6 00:24:18.959002 systemd[1]: Created slice kubepods-burstable-poddb4e0a52_bdc7_4fe0_b0be_db38bfd13c11.slice - libcontainer container kubepods-burstable-poddb4e0a52_bdc7_4fe0_b0be_db38bfd13c11.slice. 
Nov 6 00:24:19.008918 kubelet[3152]: I1106 00:24:19.008863 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwhrj\" (UniqueName: \"kubernetes.io/projected/aa4f9258-1412-4b96-8f03-51fb29c7553c-kube-api-access-nwhrj\") pod \"coredns-66bc5c9577-2grss\" (UID: \"aa4f9258-1412-4b96-8f03-51fb29c7553c\") " pod="kube-system/coredns-66bc5c9577-2grss" Nov 6 00:24:19.009129 kubelet[3152]: I1106 00:24:19.009012 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db4e0a52-bdc7-4fe0-b0be-db38bfd13c11-config-volume\") pod \"coredns-66bc5c9577-rfbd6\" (UID: \"db4e0a52-bdc7-4fe0-b0be-db38bfd13c11\") " pod="kube-system/coredns-66bc5c9577-rfbd6" Nov 6 00:24:19.009129 kubelet[3152]: I1106 00:24:19.009038 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cvfh\" (UniqueName: \"kubernetes.io/projected/db4e0a52-bdc7-4fe0-b0be-db38bfd13c11-kube-api-access-9cvfh\") pod \"coredns-66bc5c9577-rfbd6\" (UID: \"db4e0a52-bdc7-4fe0-b0be-db38bfd13c11\") " pod="kube-system/coredns-66bc5c9577-rfbd6" Nov 6 00:24:19.009358 kubelet[3152]: I1106 00:24:19.009280 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa4f9258-1412-4b96-8f03-51fb29c7553c-config-volume\") pod \"coredns-66bc5c9577-2grss\" (UID: \"aa4f9258-1412-4b96-8f03-51fb29c7553c\") " pod="kube-system/coredns-66bc5c9577-2grss" Nov 6 00:24:19.263164 containerd[1693]: time="2025-11-06T00:24:19.263061769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2grss,Uid:aa4f9258-1412-4b96-8f03-51fb29c7553c,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:19.266909 containerd[1693]: time="2025-11-06T00:24:19.266653107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rfbd6,Uid:db4e0a52-bdc7-4fe0-b0be-db38bfd13c11,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:19.695882 kubelet[3152]: I1106 00:24:19.695789 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vkd4l" podStartSLOduration=9.60514801 podStartE2EDuration="15.695772196s" podCreationTimestamp="2025-11-06 00:24:04 +0000 UTC" firstStartedPulling="2025-11-06 00:24:05.375467785 +0000 UTC m=+6.882035573" lastFinishedPulling="2025-11-06 00:24:11.466091968 +0000 UTC m=+12.972659759" observedRunningTime="2025-11-06 00:24:19.695571192 +0000 UTC m=+21.202138996" watchObservedRunningTime="2025-11-06 00:24:19.695772196 +0000 UTC m=+21.202339998" Nov 6 00:24:21.122229 systemd-networkd[1447]: cilium_host: Link UP Nov 6 00:24:21.123995 systemd-networkd[1447]: cilium_net: Link UP Nov 6 00:24:21.124310 systemd-networkd[1447]: cilium_net: Gained carrier Nov 6 00:24:21.124788 systemd-networkd[1447]: cilium_host: Gained carrier Nov 6 00:24:21.125569 systemd-networkd[1447]: cilium_net: Gained IPv6LL Nov 6 00:24:21.328037 systemd-networkd[1447]: cilium_vxlan: Link UP Nov 6 00:24:21.328229 systemd-networkd[1447]: cilium_vxlan: Gained carrier Nov 6 00:24:21.636998 kernel: NET: Registered PF_ALG protocol family Nov 6 00:24:21.813089 systemd-networkd[1447]: cilium_host: Gained IPv6LL Nov 6 00:24:22.372559 systemd-networkd[1447]: lxc_health: Link UP Nov 6 00:24:22.386425 systemd-networkd[1447]: lxc_health: Gained carrier Nov 6 00:24:22.581234 systemd-networkd[1447]: cilium_vxlan: Gained IPv6LL Nov 6 
00:24:22.813443 kernel: eth0: renamed from tmpd8506 Nov 6 00:24:22.814644 systemd-networkd[1447]: lxc52547a19728a: Link UP Nov 6 00:24:22.817037 systemd-networkd[1447]: lxc52547a19728a: Gained carrier Nov 6 00:24:22.832027 systemd-networkd[1447]: lxc875dcec3d1bd: Link UP Nov 6 00:24:22.836970 kernel: eth0: renamed from tmpa7a6a Nov 6 00:24:22.841423 systemd-networkd[1447]: lxc875dcec3d1bd: Gained carrier Nov 6 00:24:23.605145 systemd-networkd[1447]: lxc_health: Gained IPv6LL Nov 6 00:24:24.693213 systemd-networkd[1447]: lxc52547a19728a: Gained IPv6LL Nov 6 00:24:24.694030 systemd-networkd[1447]: lxc875dcec3d1bd: Gained IPv6LL Nov 6 00:24:25.799190 containerd[1693]: time="2025-11-06T00:24:25.799112227Z" level=info msg="connecting to shim a7a6a70e89c60f7d8c2823f3205f4690c1992c1e0580fbea578bf2f2c33c1dd2" address="unix:///run/containerd/s/48766f70b8a6d0e3ad4b9ffb02e8604447f7a0ca843f7af846943ed0fe97d336" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:25.810457 containerd[1693]: time="2025-11-06T00:24:25.810413847Z" level=info msg="connecting to shim d8506e9a692706dcbacb112d51d2774d0a1ce10a6915541ac94dd788a797a0b8" address="unix:///run/containerd/s/7b70d932c872003f3fffe354767c742dd8d136ade0ba4b91873b6e0ac7ebd88a" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:25.832081 systemd[1]: Started cri-containerd-a7a6a70e89c60f7d8c2823f3205f4690c1992c1e0580fbea578bf2f2c33c1dd2.scope - libcontainer container a7a6a70e89c60f7d8c2823f3205f4690c1992c1e0580fbea578bf2f2c33c1dd2. Nov 6 00:24:25.836665 systemd[1]: Started cri-containerd-d8506e9a692706dcbacb112d51d2774d0a1ce10a6915541ac94dd788a797a0b8.scope - libcontainer container d8506e9a692706dcbacb112d51d2774d0a1ce10a6915541ac94dd788a797a0b8. Nov 6 00:24:25.890239 containerd[1693]: time="2025-11-06T00:24:25.890205552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rfbd6,Uid:db4e0a52-bdc7-4fe0-b0be-db38bfd13c11,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7a6a70e89c60f7d8c2823f3205f4690c1992c1e0580fbea578bf2f2c33c1dd2\"" Nov 6 00:24:25.893847 containerd[1693]: time="2025-11-06T00:24:25.893795575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2grss,Uid:aa4f9258-1412-4b96-8f03-51fb29c7553c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8506e9a692706dcbacb112d51d2774d0a1ce10a6915541ac94dd788a797a0b8\"" Nov 6 00:24:25.898716 containerd[1693]: time="2025-11-06T00:24:25.898656954Z" level=info msg="CreateContainer within sandbox \"a7a6a70e89c60f7d8c2823f3205f4690c1992c1e0580fbea578bf2f2c33c1dd2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:24:25.902506 containerd[1693]: time="2025-11-06T00:24:25.902478364Z" level=info msg="CreateContainer within sandbox \"d8506e9a692706dcbacb112d51d2774d0a1ce10a6915541ac94dd788a797a0b8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:24:25.923514 containerd[1693]: time="2025-11-06T00:24:25.923489353Z" level=info msg="Container 23a7fca6c7ac00fb650905e76d35fd169a36eff2268e216f1d697ad07977b6ad: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:25.925629 containerd[1693]: time="2025-11-06T00:24:25.925598430Z" level=info msg="Container f41a1c5be368bbed887438ffa41de72ba9d8452672011fbfbed090e57ab8b6d0: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:25.939240 containerd[1693]: time="2025-11-06T00:24:25.939215771Z" level=info msg="CreateContainer within sandbox \"a7a6a70e89c60f7d8c2823f3205f4690c1992c1e0580fbea578bf2f2c33c1dd2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns 
container id \"23a7fca6c7ac00fb650905e76d35fd169a36eff2268e216f1d697ad07977b6ad\"" Nov 6 00:24:25.939719 containerd[1693]: time="2025-11-06T00:24:25.939542458Z" level=info msg="StartContainer for \"23a7fca6c7ac00fb650905e76d35fd169a36eff2268e216f1d697ad07977b6ad\"" Nov 6 00:24:25.940910 containerd[1693]: time="2025-11-06T00:24:25.940876464Z" level=info msg="CreateContainer within sandbox \"d8506e9a692706dcbacb112d51d2774d0a1ce10a6915541ac94dd788a797a0b8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f41a1c5be368bbed887438ffa41de72ba9d8452672011fbfbed090e57ab8b6d0\"" Nov 6 00:24:25.941344 containerd[1693]: time="2025-11-06T00:24:25.941325009Z" level=info msg="StartContainer for \"f41a1c5be368bbed887438ffa41de72ba9d8452672011fbfbed090e57ab8b6d0\"" Nov 6 00:24:25.941508 containerd[1693]: time="2025-11-06T00:24:25.941395744Z" level=info msg="connecting to shim 23a7fca6c7ac00fb650905e76d35fd169a36eff2268e216f1d697ad07977b6ad" address="unix:///run/containerd/s/48766f70b8a6d0e3ad4b9ffb02e8604447f7a0ca843f7af846943ed0fe97d336" protocol=ttrpc version=3 Nov 6 00:24:25.942379 containerd[1693]: time="2025-11-06T00:24:25.942355694Z" level=info msg="connecting to shim f41a1c5be368bbed887438ffa41de72ba9d8452672011fbfbed090e57ab8b6d0" address="unix:///run/containerd/s/7b70d932c872003f3fffe354767c742dd8d136ade0ba4b91873b6e0ac7ebd88a" protocol=ttrpc version=3 Nov 6 00:24:25.962088 systemd[1]: Started cri-containerd-23a7fca6c7ac00fb650905e76d35fd169a36eff2268e216f1d697ad07977b6ad.scope - libcontainer container 23a7fca6c7ac00fb650905e76d35fd169a36eff2268e216f1d697ad07977b6ad. Nov 6 00:24:25.964795 systemd[1]: Started cri-containerd-f41a1c5be368bbed887438ffa41de72ba9d8452672011fbfbed090e57ab8b6d0.scope - libcontainer container f41a1c5be368bbed887438ffa41de72ba9d8452672011fbfbed090e57ab8b6d0. Nov 6 00:24:26.023000 containerd[1693]: time="2025-11-06T00:24:26.022965242Z" level=info msg="StartContainer for \"f41a1c5be368bbed887438ffa41de72ba9d8452672011fbfbed090e57ab8b6d0\" returns successfully" Nov 6 00:24:26.024286 containerd[1693]: time="2025-11-06T00:24:26.024258441Z" level=info msg="StartContainer for \"23a7fca6c7ac00fb650905e76d35fd169a36eff2268e216f1d697ad07977b6ad\" returns successfully" Nov 6 00:24:26.710120 kubelet[3152]: I1106 00:24:26.710060 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rfbd6" podStartSLOduration=21.710033064 podStartE2EDuration="21.710033064s" podCreationTimestamp="2025-11-06 00:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:24:26.708709817 +0000 UTC m=+28.215277619" watchObservedRunningTime="2025-11-06 00:24:26.710033064 +0000 UTC m=+28.216600865" Nov 6 00:24:26.728476 kubelet[3152]: I1106 00:24:26.728332 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2grss" podStartSLOduration=21.728315127 podStartE2EDuration="21.728315127s" podCreationTimestamp="2025-11-06 00:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:24:26.726892296 +0000 UTC m=+28.233460101" watchObservedRunningTime="2025-11-06 00:24:26.728315127 +0000 UTC m=+28.234883128" Nov 6 00:24:26.778738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1297032195.mount: Deactivated successfully. 
Nov 6 00:24:31.598656 kubelet[3152]: I1106 00:24:31.597903 3152 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:26:01.491652 systemd[1]: Started sshd@7-10.200.8.40:22-10.200.16.10:42256.service - OpenSSH per-connection server daemon (10.200.16.10:42256). Nov 6 00:26:02.121977 sshd[4482]: Accepted publickey for core from 10.200.16.10 port 42256 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:02.123174 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:02.129828 systemd-logind[1676]: New session 10 of user core. Nov 6 00:26:02.135103 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 00:26:02.685273 sshd[4485]: Connection closed by 10.200.16.10 port 42256 Nov 6 00:26:02.685771 sshd-session[4482]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:02.688995 systemd[1]: sshd@7-10.200.8.40:22-10.200.16.10:42256.service: Deactivated successfully. Nov 6 00:26:02.690817 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 00:26:02.691931 systemd-logind[1676]: Session 10 logged out. Waiting for processes to exit. Nov 6 00:26:02.692833 systemd-logind[1676]: Removed session 10. Nov 6 00:26:07.799997 systemd[1]: Started sshd@8-10.200.8.40:22-10.200.16.10:42268.service - OpenSSH per-connection server daemon (10.200.16.10:42268). Nov 6 00:26:08.431990 sshd[4500]: Accepted publickey for core from 10.200.16.10 port 42268 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:08.433086 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:08.437456 systemd-logind[1676]: New session 11 of user core. Nov 6 00:26:08.445078 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 00:26:08.923128 sshd[4503]: Connection closed by 10.200.16.10 port 42268 Nov 6 00:26:08.923664 sshd-session[4500]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:08.927046 systemd[1]: sshd@8-10.200.8.40:22-10.200.16.10:42268.service: Deactivated successfully. Nov 6 00:26:08.928858 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 00:26:08.929621 systemd-logind[1676]: Session 11 logged out. Waiting for processes to exit. Nov 6 00:26:08.930786 systemd-logind[1676]: Removed session 11. Nov 6 00:26:14.045255 systemd[1]: Started sshd@9-10.200.8.40:22-10.200.16.10:47298.service - OpenSSH per-connection server daemon (10.200.16.10:47298). Nov 6 00:26:14.676519 sshd[4516]: Accepted publickey for core from 10.200.16.10 port 47298 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:14.677702 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:14.682124 systemd-logind[1676]: New session 12 of user core. Nov 6 00:26:14.688092 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 00:26:15.169495 sshd[4519]: Connection closed by 10.200.16.10 port 47298 Nov 6 00:26:15.170092 sshd-session[4516]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:15.173292 systemd[1]: sshd@9-10.200.8.40:22-10.200.16.10:47298.service: Deactivated successfully. Nov 6 00:26:15.175339 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 00:26:15.176187 systemd-logind[1676]: Session 12 logged out. Waiting for processes to exit. Nov 6 00:26:15.177689 systemd-logind[1676]: Removed session 12. 
Nov 6 00:26:20.280145 systemd[1]: Started sshd@10-10.200.8.40:22-10.200.16.10:60828.service - OpenSSH per-connection server daemon (10.200.16.10:60828). Nov 6 00:26:20.911106 sshd[4532]: Accepted publickey for core from 10.200.16.10 port 60828 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:20.912253 sshd-session[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:20.916758 systemd-logind[1676]: New session 13 of user core. Nov 6 00:26:20.925083 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 00:26:21.409756 sshd[4535]: Connection closed by 10.200.16.10 port 60828 Nov 6 00:26:21.410316 sshd-session[4532]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:21.413608 systemd[1]: sshd@10-10.200.8.40:22-10.200.16.10:60828.service: Deactivated successfully. Nov 6 00:26:21.415439 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 00:26:21.416564 systemd-logind[1676]: Session 13 logged out. Waiting for processes to exit. Nov 6 00:26:21.417668 systemd-logind[1676]: Removed session 13. Nov 6 00:26:26.531412 systemd[1]: Started sshd@11-10.200.8.40:22-10.200.16.10:60844.service - OpenSSH per-connection server daemon (10.200.16.10:60844). Nov 6 00:26:27.167569 sshd[4549]: Accepted publickey for core from 10.200.16.10 port 60844 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:27.168723 sshd-session[4549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:27.173345 systemd-logind[1676]: New session 14 of user core. Nov 6 00:26:27.179117 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 00:26:27.659364 sshd[4552]: Connection closed by 10.200.16.10 port 60844 Nov 6 00:26:27.660149 sshd-session[4549]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:27.663682 systemd[1]: sshd@11-10.200.8.40:22-10.200.16.10:60844.service: Deactivated successfully. Nov 6 00:26:27.665868 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 00:26:27.666620 systemd-logind[1676]: Session 14 logged out. Waiting for processes to exit. Nov 6 00:26:27.668017 systemd-logind[1676]: Removed session 14. Nov 6 00:26:27.769869 systemd[1]: Started sshd@12-10.200.8.40:22-10.200.16.10:60852.service - OpenSSH per-connection server daemon (10.200.16.10:60852). Nov 6 00:26:28.397963 sshd[4565]: Accepted publickey for core from 10.200.16.10 port 60852 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:28.399058 sshd-session[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:28.403391 systemd-logind[1676]: New session 15 of user core. Nov 6 00:26:28.408095 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 00:26:28.925885 sshd[4568]: Connection closed by 10.200.16.10 port 60852 Nov 6 00:26:28.926450 sshd-session[4565]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:28.929743 systemd[1]: sshd@12-10.200.8.40:22-10.200.16.10:60852.service: Deactivated successfully. Nov 6 00:26:28.931527 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 00:26:28.932361 systemd-logind[1676]: Session 15 logged out. Waiting for processes to exit. Nov 6 00:26:28.933552 systemd-logind[1676]: Removed session 15. Nov 6 00:26:29.045116 systemd[1]: Started sshd@13-10.200.8.40:22-10.200.16.10:60864.service - OpenSSH per-connection server daemon (10.200.16.10:60864). 
Nov 6 00:26:29.673828 sshd[4578]: Accepted publickey for core from 10.200.16.10 port 60864 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:29.675716 sshd-session[4578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:29.679816 systemd-logind[1676]: New session 16 of user core. Nov 6 00:26:29.685117 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 00:26:30.173594 sshd[4581]: Connection closed by 10.200.16.10 port 60864 Nov 6 00:26:30.174163 sshd-session[4578]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:30.177697 systemd[1]: sshd@13-10.200.8.40:22-10.200.16.10:60864.service: Deactivated successfully. Nov 6 00:26:30.179527 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 00:26:30.180468 systemd-logind[1676]: Session 16 logged out. Waiting for processes to exit. Nov 6 00:26:30.181929 systemd-logind[1676]: Removed session 16. Nov 6 00:26:35.303157 systemd[1]: Started sshd@14-10.200.8.40:22-10.200.16.10:49106.service - OpenSSH per-connection server daemon (10.200.16.10:49106). Nov 6 00:26:35.956720 sshd[4593]: Accepted publickey for core from 10.200.16.10 port 49106 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:35.957848 sshd-session[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:35.962400 systemd-logind[1676]: New session 17 of user core. Nov 6 00:26:35.970103 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 00:26:36.457502 sshd[4598]: Connection closed by 10.200.16.10 port 49106 Nov 6 00:26:36.459411 sshd-session[4593]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:36.462598 systemd[1]: sshd@14-10.200.8.40:22-10.200.16.10:49106.service: Deactivated successfully. Nov 6 00:26:36.464796 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 00:26:36.465861 systemd-logind[1676]: Session 17 logged out. Waiting for processes to exit. Nov 6 00:26:36.468075 systemd-logind[1676]: Removed session 17. Nov 6 00:26:36.571281 systemd[1]: Started sshd@15-10.200.8.40:22-10.200.16.10:49120.service - OpenSSH per-connection server daemon (10.200.16.10:49120). Nov 6 00:26:37.200179 sshd[4610]: Accepted publickey for core from 10.200.16.10 port 49120 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:37.201258 sshd-session[4610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:37.205283 systemd-logind[1676]: New session 18 of user core. Nov 6 00:26:37.209128 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 00:26:37.750404 sshd[4613]: Connection closed by 10.200.16.10 port 49120 Nov 6 00:26:37.750872 sshd-session[4610]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:37.754358 systemd-logind[1676]: Session 18 logged out. Waiting for processes to exit. Nov 6 00:26:37.754617 systemd[1]: sshd@15-10.200.8.40:22-10.200.16.10:49120.service: Deactivated successfully. Nov 6 00:26:37.756603 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 00:26:37.758381 systemd-logind[1676]: Removed session 18. Nov 6 00:26:37.864900 systemd[1]: Started sshd@16-10.200.8.40:22-10.200.16.10:49134.service - OpenSSH per-connection server daemon (10.200.16.10:49134). 
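The sshd entries in this stretch all follow the same shape: systemd starts a per-connection sshd@... service, systemd-logind opens a numbered session and a session-N.scope, and both are torn down when the client disconnects. A small sketch for pairing the "New session N" and "Removed session N" lines to report session lifetimes; the helper is hypothetical and only assumes the journal line format shown above:

import re
from datetime import datetime

opened = {}

def feed(line):
    # Journal-style lines, e.g.
    #   Nov 6 00:26:29.679816 systemd-logind[1676]: New session 16 of user core.
    #   Nov 6 00:26:30.181929 systemd-logind[1676]: Removed session 16.
    m = re.match(r"(\w+ +\d+ [\d:.]+) .*?(New|Removed) session (\d+)", line)
    if not m:
        return
    ts = datetime.strptime("2025 " + m.group(1), "%Y %b %d %H:%M:%S.%f")
    sid = m.group(3)
    if m.group(2) == "New":
        opened[sid] = ts
    elif sid in opened:
        # session 16 above, for instance, comes out at roughly 0.502s
        print(f"session {sid}: {(ts - opened.pop(sid)).total_seconds():.3f}s")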
Nov 6 00:26:38.491212 sshd[4623]: Accepted publickey for core from 10.200.16.10 port 49134 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:38.492305 sshd-session[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:38.496778 systemd-logind[1676]: New session 19 of user core. Nov 6 00:26:38.500092 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 00:26:39.458244 sshd[4626]: Connection closed by 10.200.16.10 port 49134 Nov 6 00:26:39.458786 sshd-session[4623]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:39.462522 systemd-logind[1676]: Session 19 logged out. Waiting for processes to exit. Nov 6 00:26:39.463208 systemd[1]: sshd@16-10.200.8.40:22-10.200.16.10:49134.service: Deactivated successfully. Nov 6 00:26:39.465201 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 00:26:39.466592 systemd-logind[1676]: Removed session 19. Nov 6 00:26:39.573085 systemd[1]: Started sshd@17-10.200.8.40:22-10.200.16.10:49138.service - OpenSSH per-connection server daemon (10.200.16.10:49138). Nov 6 00:26:40.200835 sshd[4641]: Accepted publickey for core from 10.200.16.10 port 49138 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:40.201910 sshd-session[4641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:40.206247 systemd-logind[1676]: New session 20 of user core. Nov 6 00:26:40.210133 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 00:26:40.774577 sshd[4644]: Connection closed by 10.200.16.10 port 49138 Nov 6 00:26:40.775149 sshd-session[4641]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:40.778891 systemd[1]: sshd@17-10.200.8.40:22-10.200.16.10:49138.service: Deactivated successfully. Nov 6 00:26:40.780819 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 00:26:40.781642 systemd-logind[1676]: Session 20 logged out. Waiting for processes to exit. Nov 6 00:26:40.782904 systemd-logind[1676]: Removed session 20. Nov 6 00:26:40.889837 systemd[1]: Started sshd@18-10.200.8.40:22-10.200.16.10:53752.service - OpenSSH per-connection server daemon (10.200.16.10:53752). Nov 6 00:26:41.519842 sshd[4656]: Accepted publickey for core from 10.200.16.10 port 53752 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:41.521008 sshd-session[4656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:41.527354 systemd-logind[1676]: New session 21 of user core. Nov 6 00:26:41.532132 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 00:26:42.020651 sshd[4659]: Connection closed by 10.200.16.10 port 53752 Nov 6 00:26:42.021240 sshd-session[4656]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:42.024027 systemd[1]: sshd@18-10.200.8.40:22-10.200.16.10:53752.service: Deactivated successfully. Nov 6 00:26:42.026449 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 00:26:42.027768 systemd-logind[1676]: Session 21 logged out. Waiting for processes to exit. Nov 6 00:26:42.028807 systemd-logind[1676]: Removed session 21. Nov 6 00:26:47.138201 systemd[1]: Started sshd@19-10.200.8.40:22-10.200.16.10:53766.service - OpenSSH per-connection server daemon (10.200.16.10:53766). 
Nov 6 00:26:47.768242 sshd[4673]: Accepted publickey for core from 10.200.16.10 port 53766 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:47.769344 sshd-session[4673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:47.773748 systemd-logind[1676]: New session 22 of user core. Nov 6 00:26:47.778125 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 6 00:26:48.256840 sshd[4676]: Connection closed by 10.200.16.10 port 53766 Nov 6 00:26:48.258150 sshd-session[4673]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:48.261578 systemd-logind[1676]: Session 22 logged out. Waiting for processes to exit. Nov 6 00:26:48.262324 systemd[1]: sshd@19-10.200.8.40:22-10.200.16.10:53766.service: Deactivated successfully. Nov 6 00:26:48.263926 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 00:26:48.265900 systemd-logind[1676]: Removed session 22. Nov 6 00:26:53.380339 systemd[1]: Started sshd@20-10.200.8.40:22-10.200.16.10:59046.service - OpenSSH per-connection server daemon (10.200.16.10:59046). Nov 6 00:26:54.013533 sshd[4688]: Accepted publickey for core from 10.200.16.10 port 59046 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:54.014709 sshd-session[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:54.019171 systemd-logind[1676]: New session 23 of user core. Nov 6 00:26:54.029075 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 00:26:54.501516 sshd[4691]: Connection closed by 10.200.16.10 port 59046 Nov 6 00:26:54.502079 sshd-session[4688]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:54.505428 systemd[1]: sshd@20-10.200.8.40:22-10.200.16.10:59046.service: Deactivated successfully. Nov 6 00:26:54.507257 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 00:26:54.507913 systemd-logind[1676]: Session 23 logged out. Waiting for processes to exit. Nov 6 00:26:54.509129 systemd-logind[1676]: Removed session 23. Nov 6 00:26:54.615218 systemd[1]: Started sshd@21-10.200.8.40:22-10.200.16.10:59052.service - OpenSSH per-connection server daemon (10.200.16.10:59052). Nov 6 00:26:55.249812 sshd[4703]: Accepted publickey for core from 10.200.16.10 port 59052 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:55.250865 sshd-session[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:55.255085 systemd-logind[1676]: New session 24 of user core. Nov 6 00:26:55.261117 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 6 00:26:56.889622 containerd[1693]: time="2025-11-06T00:26:56.889203621Z" level=info msg="StopContainer for \"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\" with timeout 30 (s)" Nov 6 00:26:56.891033 containerd[1693]: time="2025-11-06T00:26:56.890872023Z" level=info msg="Stop container \"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\" with signal terminated" Nov 6 00:26:56.903677 systemd[1]: cri-containerd-61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d.scope: Deactivated successfully. 
Nov 6 00:26:56.905689 containerd[1693]: time="2025-11-06T00:26:56.905629673Z" level=info msg="received exit event container_id:\"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\" id:\"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\" pid:3672 exited_at:{seconds:1762388816 nanos:904895275}" Nov 6 00:26:56.906144 containerd[1693]: time="2025-11-06T00:26:56.905889878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\" id:\"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\" pid:3672 exited_at:{seconds:1762388816 nanos:904895275}" Nov 6 00:26:56.922775 containerd[1693]: time="2025-11-06T00:26:56.922744172Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:26:56.928969 containerd[1693]: time="2025-11-06T00:26:56.928915982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\" id:\"80ad319f4edc81b9304c044a2259e15d0d5278b3f0cfa1ac516792eb71a30214\" pid:4731 exited_at:{seconds:1762388816 nanos:928747190}" Nov 6 00:26:56.933122 containerd[1693]: time="2025-11-06T00:26:56.933076229Z" level=info msg="StopContainer for \"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\" with timeout 2 (s)" Nov 6 00:26:56.933716 containerd[1693]: time="2025-11-06T00:26:56.933669860Z" level=info msg="Stop container \"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\" with signal terminated" Nov 6 00:26:56.939272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d-rootfs.mount: Deactivated successfully. Nov 6 00:26:56.945449 systemd-networkd[1447]: lxc_health: Link DOWN Nov 6 00:26:56.945456 systemd-networkd[1447]: lxc_health: Lost carrier Nov 6 00:26:56.958422 systemd[1]: cri-containerd-b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3.scope: Deactivated successfully. Nov 6 00:26:56.958663 systemd[1]: cri-containerd-b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3.scope: Consumed 5.385s CPU time, 122.7M memory peak, 144K read from disk, 13.3M written to disk. Nov 6 00:26:56.960292 containerd[1693]: time="2025-11-06T00:26:56.960259767Z" level=info msg="received exit event container_id:\"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\" id:\"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\" pid:3792 exited_at:{seconds:1762388816 nanos:960128338}" Nov 6 00:26:56.960482 containerd[1693]: time="2025-11-06T00:26:56.960340195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\" id:\"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\" pid:3792 exited_at:{seconds:1762388816 nanos:960128338}" Nov 6 00:26:56.976709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3-rootfs.mount: Deactivated successfully. 
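The scope teardown above also reports cumulative cgroup accounting for the cilium-agent container ("Consumed 5.385s CPU time, 122.7M memory peak"). A back-of-envelope reading of that figure against the container's lifetime, taking StartContainer returning at 00:24:18.772 (earlier in this log) and the exited_at value 00:26:56.960 from the exit event above:

start    = 18.772720   # StartContainer for b40d0711... returned, 00:24:18.772720 (seconds past 00:24:00)
exited   = 176.960128  # exited_at epoch 1762388816.960128... is 00:26:56.960, i.e. 2m56.960s past 00:24:00
cpu_time = 5.385       # "Consumed 5.385s CPU time" from the scope teardown

lifetime = exited - start                             # ~158.2s of wall-clock time
print(f"avg CPU ~ {100 * cpu_time / lifetime:.1f}%")  # roughly 3.4% of one core on average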
Nov 6 00:26:57.006387 containerd[1693]: time="2025-11-06T00:26:57.006308946Z" level=info msg="StopContainer for \"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\" returns successfully" Nov 6 00:26:57.006815 containerd[1693]: time="2025-11-06T00:26:57.006794071Z" level=info msg="StopPodSandbox for \"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\"" Nov 6 00:26:57.006892 containerd[1693]: time="2025-11-06T00:26:57.006873061Z" level=info msg="Container to stop \"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:26:57.010288 containerd[1693]: time="2025-11-06T00:26:57.010265029Z" level=info msg="StopContainer for \"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\" returns successfully" Nov 6 00:26:57.010688 containerd[1693]: time="2025-11-06T00:26:57.010669872Z" level=info msg="StopPodSandbox for \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\"" Nov 6 00:26:57.010803 containerd[1693]: time="2025-11-06T00:26:57.010789878Z" level=info msg="Container to stop \"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:26:57.010848 containerd[1693]: time="2025-11-06T00:26:57.010839351Z" level=info msg="Container to stop \"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:26:57.010887 containerd[1693]: time="2025-11-06T00:26:57.010879824Z" level=info msg="Container to stop \"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:26:57.010925 containerd[1693]: time="2025-11-06T00:26:57.010917210Z" level=info msg="Container to stop \"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:26:57.011174 containerd[1693]: time="2025-11-06T00:26:57.010985659Z" level=info msg="Container to stop \"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:26:57.015845 systemd[1]: cri-containerd-62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed.scope: Deactivated successfully. Nov 6 00:26:57.019497 systemd[1]: cri-containerd-a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3.scope: Deactivated successfully. Nov 6 00:26:57.022184 containerd[1693]: time="2025-11-06T00:26:57.022160713Z" level=info msg="TaskExit event in podsandbox handler container_id:\"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\" id:\"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\" pid:3391 exit_status:137 exited_at:{seconds:1762388817 nanos:19254548}" Nov 6 00:26:57.048559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3-rootfs.mount: Deactivated successfully. Nov 6 00:26:57.055632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed-rootfs.mount: Deactivated successfully. 
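The sandbox TaskExit event above reports exit_status:137; by the common 128-plus-signal convention that means the sandbox process was terminated with SIGKILL (signal 9) during teardown. A one-liner to decode it:

import signal
status = 137                               # exit_status from the TaskExit event above
print(signal.Signals(status - 128).name)   # SIGKILL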
Nov 6 00:26:57.072453 containerd[1693]: time="2025-11-06T00:26:57.072423056Z" level=info msg="shim disconnected" id=62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed namespace=k8s.io Nov 6 00:26:57.072453 containerd[1693]: time="2025-11-06T00:26:57.072447552Z" level=warning msg="cleaning up after shim disconnected" id=62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed namespace=k8s.io Nov 6 00:26:57.072731 containerd[1693]: time="2025-11-06T00:26:57.072454790Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 00:26:57.074526 containerd[1693]: time="2025-11-06T00:26:57.074368477Z" level=info msg="shim disconnected" id=a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3 namespace=k8s.io Nov 6 00:26:57.074643 containerd[1693]: time="2025-11-06T00:26:57.074601802Z" level=warning msg="cleaning up after shim disconnected" id=a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3 namespace=k8s.io Nov 6 00:26:57.074778 containerd[1693]: time="2025-11-06T00:26:57.074706144Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 00:26:57.085768 containerd[1693]: time="2025-11-06T00:26:57.085742939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" id:\"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" pid:3296 exit_status:137 exited_at:{seconds:1762388817 nanos:21926594}" Nov 6 00:26:57.088043 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed-shm.mount: Deactivated successfully. Nov 6 00:26:57.088327 containerd[1693]: time="2025-11-06T00:26:57.088047226Z" level=info msg="received exit event sandbox_id:\"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" exit_status:137 exited_at:{seconds:1762388817 nanos:21926594}" Nov 6 00:26:57.089438 containerd[1693]: time="2025-11-06T00:26:57.089314109Z" level=info msg="TearDown network for sandbox \"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\" successfully" Nov 6 00:26:57.089438 containerd[1693]: time="2025-11-06T00:26:57.089337062Z" level=info msg="StopPodSandbox for \"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\" returns successfully" Nov 6 00:26:57.089522 containerd[1693]: time="2025-11-06T00:26:57.089451841Z" level=info msg="received exit event sandbox_id:\"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\" exit_status:137 exited_at:{seconds:1762388817 nanos:19254548}" Nov 6 00:26:57.090641 containerd[1693]: time="2025-11-06T00:26:57.090612568Z" level=info msg="TearDown network for sandbox \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" successfully" Nov 6 00:26:57.090702 containerd[1693]: time="2025-11-06T00:26:57.090641428Z" level=info msg="StopPodSandbox for \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" returns successfully" Nov 6 00:26:57.149029 kubelet[3152]: I1106 00:26:57.148725 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-hubble-tls\") pod \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " Nov 6 00:26:57.151742 kubelet[3152]: I1106 00:26:57.149422 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-bpf-maps\") pod \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " Nov 6 00:26:57.151742 kubelet[3152]: I1106 00:26:57.149456 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mchq2\" (UniqueName: \"kubernetes.io/projected/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-kube-api-access-mchq2\") pod \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " Nov 6 00:26:57.151742 kubelet[3152]: I1106 00:26:57.149481 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t89rq\" (UniqueName: \"kubernetes.io/projected/87b575a2-4213-4c7a-aee0-4b5b182e931f-kube-api-access-t89rq\") pod \"87b575a2-4213-4c7a-aee0-4b5b182e931f\" (UID: \"87b575a2-4213-4c7a-aee0-4b5b182e931f\") " Nov 6 00:26:57.151742 kubelet[3152]: I1106 00:26:57.149479 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ae91e7a6-99cf-410e-9207-a19ac2f02a4d" (UID: "ae91e7a6-99cf-410e-9207-a19ac2f02a4d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:26:57.151742 kubelet[3152]: I1106 00:26:57.149497 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-hostproc\") pod \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " Nov 6 00:26:57.151742 kubelet[3152]: I1106 00:26:57.149511 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cni-path\") pod \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " Nov 6 00:26:57.152000 kubelet[3152]: I1106 00:26:57.149527 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-xtables-lock\") pod \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " Nov 6 00:26:57.152000 kubelet[3152]: I1106 00:26:57.149567 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-host-proc-sys-net\") pod \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " Nov 6 00:26:57.152000 kubelet[3152]: I1106 00:26:57.149587 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87b575a2-4213-4c7a-aee0-4b5b182e931f-cilium-config-path\") pod \"87b575a2-4213-4c7a-aee0-4b5b182e931f\" (UID: \"87b575a2-4213-4c7a-aee0-4b5b182e931f\") " Nov 6 00:26:57.152000 kubelet[3152]: I1106 00:26:57.149605 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cilium-config-path\") pod \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " Nov 6 00:26:57.152000 kubelet[3152]: I1106 00:26:57.149624 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-clustermesh-secrets\") pod \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " Nov 6 00:26:57.152000 kubelet[3152]: I1106 00:26:57.149642 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-etc-cni-netd\") pod \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " Nov 6 00:26:57.152151 kubelet[3152]: I1106 00:26:57.149657 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cilium-run\") pod \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " Nov 6 00:26:57.152151 kubelet[3152]: I1106 00:26:57.149674 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cilium-cgroup\") pod \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " Nov 6 00:26:57.152151 kubelet[3152]: I1106 00:26:57.149694 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-host-proc-sys-kernel\") pod \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " Nov 6 00:26:57.152151 kubelet[3152]: I1106 00:26:57.149711 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-lib-modules\") pod \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\" (UID: \"ae91e7a6-99cf-410e-9207-a19ac2f02a4d\") " Nov 6 00:26:57.152151 kubelet[3152]: I1106 00:26:57.149751 3152 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-bpf-maps\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.152151 kubelet[3152]: I1106 00:26:57.149770 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ae91e7a6-99cf-410e-9207-a19ac2f02a4d" (UID: "ae91e7a6-99cf-410e-9207-a19ac2f02a4d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:26:57.152293 kubelet[3152]: I1106 00:26:57.149788 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-hostproc" (OuterVolumeSpecName: "hostproc") pod "ae91e7a6-99cf-410e-9207-a19ac2f02a4d" (UID: "ae91e7a6-99cf-410e-9207-a19ac2f02a4d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:26:57.152293 kubelet[3152]: I1106 00:26:57.149801 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cni-path" (OuterVolumeSpecName: "cni-path") pod "ae91e7a6-99cf-410e-9207-a19ac2f02a4d" (UID: "ae91e7a6-99cf-410e-9207-a19ac2f02a4d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:26:57.152293 kubelet[3152]: I1106 00:26:57.149815 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ae91e7a6-99cf-410e-9207-a19ac2f02a4d" (UID: "ae91e7a6-99cf-410e-9207-a19ac2f02a4d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:26:57.152293 kubelet[3152]: I1106 00:26:57.149827 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ae91e7a6-99cf-410e-9207-a19ac2f02a4d" (UID: "ae91e7a6-99cf-410e-9207-a19ac2f02a4d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:26:57.154136 kubelet[3152]: I1106 00:26:57.154105 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87b575a2-4213-4c7a-aee0-4b5b182e931f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "87b575a2-4213-4c7a-aee0-4b5b182e931f" (UID: "87b575a2-4213-4c7a-aee0-4b5b182e931f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:26:57.154285 kubelet[3152]: I1106 00:26:57.154254 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ae91e7a6-99cf-410e-9207-a19ac2f02a4d" (UID: "ae91e7a6-99cf-410e-9207-a19ac2f02a4d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:26:57.154321 kubelet[3152]: I1106 00:26:57.154296 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ae91e7a6-99cf-410e-9207-a19ac2f02a4d" (UID: "ae91e7a6-99cf-410e-9207-a19ac2f02a4d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:26:57.154352 kubelet[3152]: I1106 00:26:57.154323 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ae91e7a6-99cf-410e-9207-a19ac2f02a4d" (UID: "ae91e7a6-99cf-410e-9207-a19ac2f02a4d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:26:57.154352 kubelet[3152]: I1106 00:26:57.154339 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ae91e7a6-99cf-410e-9207-a19ac2f02a4d" (UID: "ae91e7a6-99cf-410e-9207-a19ac2f02a4d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:26:57.155130 kubelet[3152]: I1106 00:26:57.155098 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b575a2-4213-4c7a-aee0-4b5b182e931f-kube-api-access-t89rq" (OuterVolumeSpecName: "kube-api-access-t89rq") pod "87b575a2-4213-4c7a-aee0-4b5b182e931f" (UID: "87b575a2-4213-4c7a-aee0-4b5b182e931f"). InnerVolumeSpecName "kube-api-access-t89rq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:26:57.155193 kubelet[3152]: I1106 00:26:57.155172 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ae91e7a6-99cf-410e-9207-a19ac2f02a4d" (UID: "ae91e7a6-99cf-410e-9207-a19ac2f02a4d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:26:57.156766 kubelet[3152]: I1106 00:26:57.156723 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ae91e7a6-99cf-410e-9207-a19ac2f02a4d" (UID: "ae91e7a6-99cf-410e-9207-a19ac2f02a4d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 00:26:57.157010 kubelet[3152]: I1106 00:26:57.156932 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae91e7a6-99cf-410e-9207-a19ac2f02a4d" (UID: "ae91e7a6-99cf-410e-9207-a19ac2f02a4d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:26:57.157779 kubelet[3152]: I1106 00:26:57.157750 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-kube-api-access-mchq2" (OuterVolumeSpecName: "kube-api-access-mchq2") pod "ae91e7a6-99cf-410e-9207-a19ac2f02a4d" (UID: "ae91e7a6-99cf-410e-9207-a19ac2f02a4d"). InnerVolumeSpecName "kube-api-access-mchq2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:26:57.250095 kubelet[3152]: I1106 00:26:57.250056 3152 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mchq2\" (UniqueName: \"kubernetes.io/projected/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-kube-api-access-mchq2\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.250095 kubelet[3152]: I1106 00:26:57.250090 3152 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t89rq\" (UniqueName: \"kubernetes.io/projected/87b575a2-4213-4c7a-aee0-4b5b182e931f-kube-api-access-t89rq\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.250095 kubelet[3152]: I1106 00:26:57.250100 3152 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-hostproc\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.250257 kubelet[3152]: I1106 00:26:57.250108 3152 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cni-path\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.250257 kubelet[3152]: I1106 00:26:57.250118 3152 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-xtables-lock\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.250257 kubelet[3152]: I1106 00:26:57.250128 3152 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-host-proc-sys-net\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.250257 kubelet[3152]: I1106 00:26:57.250135 3152 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87b575a2-4213-4c7a-aee0-4b5b182e931f-cilium-config-path\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.250257 kubelet[3152]: I1106 00:26:57.250145 3152 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cilium-config-path\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.250257 kubelet[3152]: I1106 00:26:57.250155 3152 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-clustermesh-secrets\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.250257 kubelet[3152]: I1106 00:26:57.250163 3152 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-etc-cni-netd\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.250257 kubelet[3152]: I1106 00:26:57.250171 3152 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cilium-run\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.250379 kubelet[3152]: I1106 00:26:57.250179 3152 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-cilium-cgroup\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.250379 kubelet[3152]: I1106 00:26:57.250188 3152 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-host-proc-sys-kernel\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.250379 kubelet[3152]: I1106 00:26:57.250195 3152 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-lib-modules\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.250379 kubelet[3152]: I1106 00:26:57.250202 3152 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae91e7a6-99cf-410e-9207-a19ac2f02a4d-hubble-tls\") on node \"ci-4459.1.0-n-288d7afe99\" DevicePath \"\"" Nov 6 00:26:57.939173 systemd[1]: var-lib-kubelet-pods-87b575a2\x2d4213\x2d4c7a\x2daee0\x2d4b5b182e931f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt89rq.mount: Deactivated successfully. Nov 6 00:26:57.939275 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3-shm.mount: Deactivated successfully. Nov 6 00:26:57.939337 systemd[1]: var-lib-kubelet-pods-ae91e7a6\x2d99cf\x2d410e\x2d9207\x2da19ac2f02a4d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmchq2.mount: Deactivated successfully. Nov 6 00:26:57.939393 systemd[1]: var-lib-kubelet-pods-ae91e7a6\x2d99cf\x2d410e\x2d9207\x2da19ac2f02a4d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 6 00:26:57.939456 systemd[1]: var-lib-kubelet-pods-ae91e7a6\x2d99cf\x2d410e\x2d9207\x2da19ac2f02a4d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 6 00:26:57.980985 kubelet[3152]: I1106 00:26:57.980581 3152 scope.go:117] "RemoveContainer" containerID="61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d" Nov 6 00:26:57.985519 containerd[1693]: time="2025-11-06T00:26:57.984827090Z" level=info msg="RemoveContainer for \"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\"" Nov 6 00:26:57.987383 systemd[1]: Removed slice kubepods-besteffort-pod87b575a2_4213_4c7a_aee0_4b5b182e931f.slice - libcontainer container kubepods-besteffort-pod87b575a2_4213_4c7a_aee0_4b5b182e931f.slice. Nov 6 00:26:57.992851 systemd[1]: Removed slice kubepods-burstable-podae91e7a6_99cf_410e_9207_a19ac2f02a4d.slice - libcontainer container kubepods-burstable-podae91e7a6_99cf_410e_9207_a19ac2f02a4d.slice. Nov 6 00:26:57.993456 systemd[1]: kubepods-burstable-podae91e7a6_99cf_410e_9207_a19ac2f02a4d.slice: Consumed 5.456s CPU time, 123.1M memory peak, 144K read from disk, 13.3M written to disk. 
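The transient mount units being cleaned up above use systemd's unit-name escaping for paths: "/" becomes "-", and bytes outside [A-Za-z0-9:_.] are written as \x hex escapes, which is why the pod volume paths show up with \x2d (for "-") and \x7e (for "~"). A minimal sketch approximating that escaping (in the spirit of systemd-escape --path; illustrative only, not the systemd implementation, and it ignores corner cases such as a leading dot):

def systemd_escape_path(path: str) -> str:
    # '/' maps to '-'; other characters outside [A-Za-z0-9:_.] map to \xNN
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

p = "/var/lib/kubelet/pods/ae91e7a6-99cf-410e-9207-a19ac2f02a4d/volumes/kubernetes.io~projected/hubble-tls"
print(systemd_escape_path(p))
# var-lib-kubelet-pods-ae91e7a6\x2d99cf\x2d410e\x2d9207\x2da19ac2f02a4d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls
# which matches the ...projected-hubble\x2dtls.mount unit named above (minus the .mount suffix)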
Nov 6 00:26:57.997723 containerd[1693]: time="2025-11-06T00:26:57.997174558Z" level=info msg="RemoveContainer for \"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\" returns successfully" Nov 6 00:26:57.997799 kubelet[3152]: I1106 00:26:57.997596 3152 scope.go:117] "RemoveContainer" containerID="61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d" Nov 6 00:26:57.998421 containerd[1693]: time="2025-11-06T00:26:57.998379767Z" level=error msg="ContainerStatus for \"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\": not found" Nov 6 00:26:57.998605 kubelet[3152]: E1106 00:26:57.998582 3152 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\": not found" containerID="61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d" Nov 6 00:26:57.998654 kubelet[3152]: I1106 00:26:57.998610 3152 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d"} err="failed to get container status \"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\": rpc error: code = NotFound desc = an error occurred when try to find container \"61fd4150b9ca797693b17c17822873de43411eaf1705ae0e0bd26ca6e206ba8d\": not found" Nov 6 00:26:57.998654 kubelet[3152]: I1106 00:26:57.998642 3152 scope.go:117] "RemoveContainer" containerID="b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3" Nov 6 00:26:58.000737 containerd[1693]: time="2025-11-06T00:26:58.000324384Z" level=info msg="RemoveContainer for \"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\"" Nov 6 00:26:58.007482 containerd[1693]: time="2025-11-06T00:26:58.007451168Z" level=info msg="RemoveContainer for \"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\" returns successfully" Nov 6 00:26:58.007646 kubelet[3152]: I1106 00:26:58.007601 3152 scope.go:117] "RemoveContainer" containerID="b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46" Nov 6 00:26:58.009663 containerd[1693]: time="2025-11-06T00:26:58.009639821Z" level=info msg="RemoveContainer for \"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46\"" Nov 6 00:26:58.018446 containerd[1693]: time="2025-11-06T00:26:58.018374443Z" level=info msg="RemoveContainer for \"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46\" returns successfully" Nov 6 00:26:58.020103 kubelet[3152]: I1106 00:26:58.019462 3152 scope.go:117] "RemoveContainer" containerID="2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e" Nov 6 00:26:58.024971 containerd[1693]: time="2025-11-06T00:26:58.024379111Z" level=info msg="RemoveContainer for \"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e\"" Nov 6 00:26:58.030788 containerd[1693]: time="2025-11-06T00:26:58.030764563Z" level=info msg="RemoveContainer for \"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e\" returns successfully" Nov 6 00:26:58.032009 kubelet[3152]: I1106 00:26:58.031989 3152 scope.go:117] "RemoveContainer" containerID="d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5" Nov 6 00:26:58.033821 containerd[1693]: time="2025-11-06T00:26:58.033798967Z" 
level=info msg="RemoveContainer for \"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5\"" Nov 6 00:26:58.039888 containerd[1693]: time="2025-11-06T00:26:58.039862751Z" level=info msg="RemoveContainer for \"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5\" returns successfully" Nov 6 00:26:58.040630 kubelet[3152]: I1106 00:26:58.040043 3152 scope.go:117] "RemoveContainer" containerID="7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c" Nov 6 00:26:58.041646 containerd[1693]: time="2025-11-06T00:26:58.041620769Z" level=info msg="RemoveContainer for \"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c\"" Nov 6 00:26:58.047099 containerd[1693]: time="2025-11-06T00:26:58.047075559Z" level=info msg="RemoveContainer for \"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c\" returns successfully" Nov 6 00:26:58.047244 kubelet[3152]: I1106 00:26:58.047225 3152 scope.go:117] "RemoveContainer" containerID="b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3" Nov 6 00:26:58.047411 containerd[1693]: time="2025-11-06T00:26:58.047382160Z" level=error msg="ContainerStatus for \"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\": not found" Nov 6 00:26:58.047520 kubelet[3152]: E1106 00:26:58.047501 3152 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\": not found" containerID="b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3" Nov 6 00:26:58.047562 kubelet[3152]: I1106 00:26:58.047548 3152 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3"} err="failed to get container status \"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"b40d0711fa4857a1d3a2a263186e231d8af46070cb45d6ed5862b8c6da71a0c3\": not found" Nov 6 00:26:58.047589 kubelet[3152]: I1106 00:26:58.047566 3152 scope.go:117] "RemoveContainer" containerID="b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46" Nov 6 00:26:58.047758 containerd[1693]: time="2025-11-06T00:26:58.047728879Z" level=error msg="ContainerStatus for \"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46\": not found" Nov 6 00:26:58.047871 kubelet[3152]: E1106 00:26:58.047854 3152 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46\": not found" containerID="b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46" Nov 6 00:26:58.047908 kubelet[3152]: I1106 00:26:58.047872 3152 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46"} err="failed to get container status \"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"b5d631ddd034ca327b2b8b743e426119aade920a1f6be5fa1fe8b8d13d74ee46\": not found" Nov 6 00:26:58.047908 kubelet[3152]: I1106 00:26:58.047886 3152 scope.go:117] "RemoveContainer" containerID="2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e" Nov 6 00:26:58.048116 containerd[1693]: time="2025-11-06T00:26:58.048094092Z" level=error msg="ContainerStatus for \"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e\": not found" Nov 6 00:26:58.048198 kubelet[3152]: E1106 00:26:58.048183 3152 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e\": not found" containerID="2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e" Nov 6 00:26:58.048247 kubelet[3152]: I1106 00:26:58.048201 3152 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e"} err="failed to get container status \"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d2463b4920a74110b3ceb63cbb8f38101a6148b44971ed9699ada620f6f8d0e\": not found" Nov 6 00:26:58.048247 kubelet[3152]: I1106 00:26:58.048227 3152 scope.go:117] "RemoveContainer" containerID="d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5" Nov 6 00:26:58.048403 containerd[1693]: time="2025-11-06T00:26:58.048379185Z" level=error msg="ContainerStatus for \"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5\": not found" Nov 6 00:26:58.048497 kubelet[3152]: E1106 00:26:58.048480 3152 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5\": not found" containerID="d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5" Nov 6 00:26:58.048527 kubelet[3152]: I1106 00:26:58.048498 3152 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5"} err="failed to get container status \"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d703a65cdf9f893d3504c44dcce8161fc5e66680dd3012c2600d21ab91df14e5\": not found" Nov 6 00:26:58.048527 kubelet[3152]: I1106 00:26:58.048511 3152 scope.go:117] "RemoveContainer" containerID="7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c" Nov 6 00:26:58.048703 containerd[1693]: time="2025-11-06T00:26:58.048637569Z" level=error msg="ContainerStatus for \"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c\": not found" Nov 6 00:26:58.048746 kubelet[3152]: E1106 00:26:58.048731 3152 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c\": not found" containerID="7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c" Nov 6 00:26:58.048772 kubelet[3152]: I1106 00:26:58.048748 3152 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c"} err="failed to get container status \"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ee709f55a526a5272839f6e672ac1209e2d7d6fdefedf26a20afd709d237f3c\": not found" Nov 6 00:26:58.588646 containerd[1693]: time="2025-11-06T00:26:58.588598084Z" level=info msg="StopPodSandbox for \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\"" Nov 6 00:26:58.589138 containerd[1693]: time="2025-11-06T00:26:58.589071078Z" level=info msg="TearDown network for sandbox \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" successfully" Nov 6 00:26:58.589138 containerd[1693]: time="2025-11-06T00:26:58.589093807Z" level=info msg="StopPodSandbox for \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" returns successfully" Nov 6 00:26:58.592118 containerd[1693]: time="2025-11-06T00:26:58.592057493Z" level=info msg="RemovePodSandbox for \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\"" Nov 6 00:26:58.592118 containerd[1693]: time="2025-11-06T00:26:58.592092459Z" level=info msg="Forcibly stopping sandbox \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\"" Nov 6 00:26:58.592371 containerd[1693]: time="2025-11-06T00:26:58.592358895Z" level=info msg="TearDown network for sandbox \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" successfully" Nov 6 00:26:58.593765 containerd[1693]: time="2025-11-06T00:26:58.593653211Z" level=info msg="Ensure that sandbox a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3 in task-service has been cleanup successfully" Nov 6 00:26:58.602472 containerd[1693]: time="2025-11-06T00:26:58.602276804Z" level=info msg="RemovePodSandbox \"a59c91a07e027608dbc256ebde1e253287b83697844087e98e6ef5f699a007c3\" returns successfully" Nov 6 00:26:58.602606 kubelet[3152]: I1106 00:26:58.602556 3152 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b575a2-4213-4c7a-aee0-4b5b182e931f" path="/var/lib/kubelet/pods/87b575a2-4213-4c7a-aee0-4b5b182e931f/volumes" Nov 6 00:26:58.603203 kubelet[3152]: I1106 00:26:58.602933 3152 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae91e7a6-99cf-410e-9207-a19ac2f02a4d" path="/var/lib/kubelet/pods/ae91e7a6-99cf-410e-9207-a19ac2f02a4d/volumes" Nov 6 00:26:58.603270 containerd[1693]: time="2025-11-06T00:26:58.603025656Z" level=info msg="StopPodSandbox for \"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\"" Nov 6 00:26:58.603270 containerd[1693]: time="2025-11-06T00:26:58.603128983Z" level=info msg="TearDown network for sandbox \"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\" successfully" Nov 6 00:26:58.603270 containerd[1693]: time="2025-11-06T00:26:58.603140775Z" level=info msg="StopPodSandbox for \"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\" returns successfully" Nov 6 00:26:58.603979 containerd[1693]: time="2025-11-06T00:26:58.603599172Z" level=info 
msg="RemovePodSandbox for \"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\"" Nov 6 00:26:58.603979 containerd[1693]: time="2025-11-06T00:26:58.603623046Z" level=info msg="Forcibly stopping sandbox \"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\"" Nov 6 00:26:58.603979 containerd[1693]: time="2025-11-06T00:26:58.603694910Z" level=info msg="TearDown network for sandbox \"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\" successfully" Nov 6 00:26:58.604496 containerd[1693]: time="2025-11-06T00:26:58.604477074Z" level=info msg="Ensure that sandbox 62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed in task-service has been cleanup successfully" Nov 6 00:26:58.640180 containerd[1693]: time="2025-11-06T00:26:58.640150281Z" level=info msg="RemovePodSandbox \"62ea2860e92e4ba8ab461286443480b219f8e77b9bb5a0d6814078ff52626fed\" returns successfully" Nov 6 00:26:58.676427 kubelet[3152]: E1106 00:26:58.676400 3152 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 6 00:26:58.938404 sshd[4706]: Connection closed by 10.200.16.10 port 59052 Nov 6 00:26:58.938859 sshd-session[4703]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:58.943605 systemd[1]: sshd@21-10.200.8.40:22-10.200.16.10:59052.service: Deactivated successfully. Nov 6 00:26:58.947003 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 00:26:58.950013 systemd-logind[1676]: Session 24 logged out. Waiting for processes to exit. Nov 6 00:26:58.951789 systemd-logind[1676]: Removed session 24. Nov 6 00:26:59.054727 systemd[1]: Started sshd@22-10.200.8.40:22-10.200.16.10:59064.service - OpenSSH per-connection server daemon (10.200.16.10:59064). Nov 6 00:26:59.684984 sshd[4862]: Accepted publickey for core from 10.200.16.10 port 59064 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:59.686076 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:59.691188 systemd-logind[1676]: New session 25 of user core. Nov 6 00:26:59.699132 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 6 00:27:00.701345 systemd[1]: Created slice kubepods-burstable-pod68bc11d2_9d69_466d_9415_234ca7340a64.slice - libcontainer container kubepods-burstable-pod68bc11d2_9d69_466d_9415_234ca7340a64.slice. 
Nov 6 00:27:00.768574 kubelet[3152]: I1106 00:27:00.768164 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68bc11d2-9d69-466d-9415-234ca7340a64-bpf-maps\") pod \"cilium-cp7wz\" (UID: \"68bc11d2-9d69-466d-9415-234ca7340a64\") " pod="kube-system/cilium-cp7wz" Nov 6 00:27:00.768574 kubelet[3152]: I1106 00:27:00.768198 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68bc11d2-9d69-466d-9415-234ca7340a64-host-proc-sys-net\") pod \"cilium-cp7wz\" (UID: \"68bc11d2-9d69-466d-9415-234ca7340a64\") " pod="kube-system/cilium-cp7wz" Nov 6 00:27:00.768574 kubelet[3152]: I1106 00:27:00.768227 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68bc11d2-9d69-466d-9415-234ca7340a64-cilium-run\") pod \"cilium-cp7wz\" (UID: \"68bc11d2-9d69-466d-9415-234ca7340a64\") " pod="kube-system/cilium-cp7wz" Nov 6 00:27:00.768574 kubelet[3152]: I1106 00:27:00.768244 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68bc11d2-9d69-466d-9415-234ca7340a64-cilium-cgroup\") pod \"cilium-cp7wz\" (UID: \"68bc11d2-9d69-466d-9415-234ca7340a64\") " pod="kube-system/cilium-cp7wz" Nov 6 00:27:00.768574 kubelet[3152]: I1106 00:27:00.768262 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68bc11d2-9d69-466d-9415-234ca7340a64-cni-path\") pod \"cilium-cp7wz\" (UID: \"68bc11d2-9d69-466d-9415-234ca7340a64\") " pod="kube-system/cilium-cp7wz" Nov 6 00:27:00.768574 kubelet[3152]: I1106 00:27:00.768281 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/68bc11d2-9d69-466d-9415-234ca7340a64-cilium-ipsec-secrets\") pod \"cilium-cp7wz\" (UID: \"68bc11d2-9d69-466d-9415-234ca7340a64\") " pod="kube-system/cilium-cp7wz" Nov 6 00:27:00.769053 kubelet[3152]: I1106 00:27:00.768300 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68bc11d2-9d69-466d-9415-234ca7340a64-lib-modules\") pod \"cilium-cp7wz\" (UID: \"68bc11d2-9d69-466d-9415-234ca7340a64\") " pod="kube-system/cilium-cp7wz" Nov 6 00:27:00.769053 kubelet[3152]: I1106 00:27:00.768316 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68bc11d2-9d69-466d-9415-234ca7340a64-cilium-config-path\") pod \"cilium-cp7wz\" (UID: \"68bc11d2-9d69-466d-9415-234ca7340a64\") " pod="kube-system/cilium-cp7wz" Nov 6 00:27:00.769053 kubelet[3152]: I1106 00:27:00.768334 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68bc11d2-9d69-466d-9415-234ca7340a64-hubble-tls\") pod \"cilium-cp7wz\" (UID: \"68bc11d2-9d69-466d-9415-234ca7340a64\") " pod="kube-system/cilium-cp7wz" Nov 6 00:27:00.769053 kubelet[3152]: I1106 00:27:00.768351 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/68bc11d2-9d69-466d-9415-234ca7340a64-hostproc\") pod \"cilium-cp7wz\" (UID: \"68bc11d2-9d69-466d-9415-234ca7340a64\") " pod="kube-system/cilium-cp7wz" Nov 6 00:27:00.769053 kubelet[3152]: I1106 00:27:00.768367 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68bc11d2-9d69-466d-9415-234ca7340a64-etc-cni-netd\") pod \"cilium-cp7wz\" (UID: \"68bc11d2-9d69-466d-9415-234ca7340a64\") " pod="kube-system/cilium-cp7wz" Nov 6 00:27:00.769053 kubelet[3152]: I1106 00:27:00.768384 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68bc11d2-9d69-466d-9415-234ca7340a64-clustermesh-secrets\") pod \"cilium-cp7wz\" (UID: \"68bc11d2-9d69-466d-9415-234ca7340a64\") " pod="kube-system/cilium-cp7wz" Nov 6 00:27:00.769191 kubelet[3152]: I1106 00:27:00.768399 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68bc11d2-9d69-466d-9415-234ca7340a64-host-proc-sys-kernel\") pod \"cilium-cp7wz\" (UID: \"68bc11d2-9d69-466d-9415-234ca7340a64\") " pod="kube-system/cilium-cp7wz" Nov 6 00:27:00.769191 kubelet[3152]: I1106 00:27:00.768422 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68bc11d2-9d69-466d-9415-234ca7340a64-xtables-lock\") pod \"cilium-cp7wz\" (UID: \"68bc11d2-9d69-466d-9415-234ca7340a64\") " pod="kube-system/cilium-cp7wz" Nov 6 00:27:00.769191 kubelet[3152]: I1106 00:27:00.768441 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvjk7\" (UniqueName: \"kubernetes.io/projected/68bc11d2-9d69-466d-9415-234ca7340a64-kube-api-access-pvjk7\") pod \"cilium-cp7wz\" (UID: \"68bc11d2-9d69-466d-9415-234ca7340a64\") " pod="kube-system/cilium-cp7wz" Nov 6 00:27:00.783683 sshd[4865]: Connection closed by 10.200.16.10 port 59064 Nov 6 00:27:00.784244 sshd-session[4862]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:00.787732 systemd[1]: sshd@22-10.200.8.40:22-10.200.16.10:59064.service: Deactivated successfully. Nov 6 00:27:00.789436 systemd[1]: session-25.scope: Deactivated successfully. Nov 6 00:27:00.790627 systemd-logind[1676]: Session 25 logged out. Waiting for processes to exit. Nov 6 00:27:00.791672 systemd-logind[1676]: Removed session 25. Nov 6 00:27:00.908597 systemd[1]: Started sshd@23-10.200.8.40:22-10.200.16.10:55436.service - OpenSSH per-connection server daemon (10.200.16.10:55436). Nov 6 00:27:01.011433 containerd[1693]: time="2025-11-06T00:27:01.011352068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cp7wz,Uid:68bc11d2-9d69-466d-9415-234ca7340a64,Namespace:kube-system,Attempt:0,}" Nov 6 00:27:01.043848 containerd[1693]: time="2025-11-06T00:27:01.043814698Z" level=info msg="connecting to shim ccccd455f91296a42cb974b142971454428d5dc80487367860bf1eee0c3bb1e8" address="unix:///run/containerd/s/11ac65b31bccfc7ba8480b242ca49c27f437c9364b5a7e1d86a71cd6717e72a2" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:01.062101 systemd[1]: Started cri-containerd-ccccd455f91296a42cb974b142971454428d5dc80487367860bf1eee0c3bb1e8.scope - libcontainer container ccccd455f91296a42cb974b142971454428d5dc80487367860bf1eee0c3bb1e8. 
Nov 6 00:27:01.084132 containerd[1693]: time="2025-11-06T00:27:01.084097583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cp7wz,Uid:68bc11d2-9d69-466d-9415-234ca7340a64,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccccd455f91296a42cb974b142971454428d5dc80487367860bf1eee0c3bb1e8\"" Nov 6 00:27:01.090325 containerd[1693]: time="2025-11-06T00:27:01.090298286Z" level=info msg="CreateContainer within sandbox \"ccccd455f91296a42cb974b142971454428d5dc80487367860bf1eee0c3bb1e8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 00:27:01.111481 containerd[1693]: time="2025-11-06T00:27:01.111454613Z" level=info msg="Container 3ffeeddfd9139404323ab18704566d62db8cc548203e7292f90603b72b5c13f3: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:01.122807 containerd[1693]: time="2025-11-06T00:27:01.122780004Z" level=info msg="CreateContainer within sandbox \"ccccd455f91296a42cb974b142971454428d5dc80487367860bf1eee0c3bb1e8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3ffeeddfd9139404323ab18704566d62db8cc548203e7292f90603b72b5c13f3\"" Nov 6 00:27:01.123235 containerd[1693]: time="2025-11-06T00:27:01.123217420Z" level=info msg="StartContainer for \"3ffeeddfd9139404323ab18704566d62db8cc548203e7292f90603b72b5c13f3\"" Nov 6 00:27:01.124154 containerd[1693]: time="2025-11-06T00:27:01.124048589Z" level=info msg="connecting to shim 3ffeeddfd9139404323ab18704566d62db8cc548203e7292f90603b72b5c13f3" address="unix:///run/containerd/s/11ac65b31bccfc7ba8480b242ca49c27f437c9364b5a7e1d86a71cd6717e72a2" protocol=ttrpc version=3 Nov 6 00:27:01.144079 systemd[1]: Started cri-containerd-3ffeeddfd9139404323ab18704566d62db8cc548203e7292f90603b72b5c13f3.scope - libcontainer container 3ffeeddfd9139404323ab18704566d62db8cc548203e7292f90603b72b5c13f3. Nov 6 00:27:01.170906 containerd[1693]: time="2025-11-06T00:27:01.170865864Z" level=info msg="StartContainer for \"3ffeeddfd9139404323ab18704566d62db8cc548203e7292f90603b72b5c13f3\" returns successfully" Nov 6 00:27:01.175493 systemd[1]: cri-containerd-3ffeeddfd9139404323ab18704566d62db8cc548203e7292f90603b72b5c13f3.scope: Deactivated successfully. Nov 6 00:27:01.178226 containerd[1693]: time="2025-11-06T00:27:01.178203444Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ffeeddfd9139404323ab18704566d62db8cc548203e7292f90603b72b5c13f3\" id:\"3ffeeddfd9139404323ab18704566d62db8cc548203e7292f90603b72b5c13f3\" pid:4940 exited_at:{seconds:1762388821 nanos:177897810}" Nov 6 00:27:01.178306 containerd[1693]: time="2025-11-06T00:27:01.178261423Z" level=info msg="received exit event container_id:\"3ffeeddfd9139404323ab18704566d62db8cc548203e7292f90603b72b5c13f3\" id:\"3ffeeddfd9139404323ab18704566d62db8cc548203e7292f90603b72b5c13f3\" pid:4940 exited_at:{seconds:1762388821 nanos:177897810}" Nov 6 00:27:01.540694 sshd[4881]: Accepted publickey for core from 10.200.16.10 port 55436 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:27:01.541791 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:01.545399 systemd-logind[1676]: New session 26 of user core. Nov 6 00:27:01.550084 systemd[1]: Started session-26.scope - Session 26 of User core. 
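The containerd entries above trace the standard CRI sequence for the new pod: RunPodSandbox returns the sandbox ID ccccd455..., CreateContainer places the mount-cgroup init container inside that sandbox, StartContainer runs it, and the immediate scope deactivation plus TaskExit event reflect that init containers run to completion and exit. A compressed, self-contained sketch of those three calls with the same CRI v1 client as above; the pod name, namespace, and UID come from the log, while the socket path and image reference are placeholders or assumptions:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox metadata matches the pod shown in the log.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-cp7wz",
			Namespace: "kube-system",
			Uid:       "68bc11d2-9d69-466d-9415-234ca7340a64",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatalf("RunPodSandbox: %v", err)
	}

	// Image reference is a placeholder; the journal does not record it.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "<cilium-image-ref>"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatalf("CreateContainer: %v", err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatalf("StartContainer: %v", err)
	}
	log.Printf("started %s in sandbox %s", ctr.ContainerId, sb.PodSandboxId)
}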
Nov 6 00:27:01.674651 kubelet[3152]: I1106 00:27:01.674610 3152 setters.go:543] "Node became not ready" node="ci-4459.1.0-n-288d7afe99" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-06T00:27:01Z","lastTransitionTime":"2025-11-06T00:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 6 00:27:01.983794 sshd[4973]: Connection closed by 10.200.16.10 port 55436 Nov 6 00:27:01.984308 sshd-session[4881]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:01.987104 systemd[1]: sshd@23-10.200.8.40:22-10.200.16.10:55436.service: Deactivated successfully. Nov 6 00:27:01.989155 systemd[1]: session-26.scope: Deactivated successfully. Nov 6 00:27:01.992837 systemd-logind[1676]: Session 26 logged out. Waiting for processes to exit. Nov 6 00:27:01.993798 systemd-logind[1676]: Removed session 26. Nov 6 00:27:02.008331 containerd[1693]: time="2025-11-06T00:27:02.008274018Z" level=info msg="CreateContainer within sandbox \"ccccd455f91296a42cb974b142971454428d5dc80487367860bf1eee0c3bb1e8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 00:27:02.029056 containerd[1693]: time="2025-11-06T00:27:02.028989879Z" level=info msg="Container af60eacf2795ac04ad305ec89a6541182b9ce824ee67eaf80d16abe3f62c092b: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:02.040794 containerd[1693]: time="2025-11-06T00:27:02.040755656Z" level=info msg="CreateContainer within sandbox \"ccccd455f91296a42cb974b142971454428d5dc80487367860bf1eee0c3bb1e8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"af60eacf2795ac04ad305ec89a6541182b9ce824ee67eaf80d16abe3f62c092b\"" Nov 6 00:27:02.041413 containerd[1693]: time="2025-11-06T00:27:02.041345624Z" level=info msg="StartContainer for \"af60eacf2795ac04ad305ec89a6541182b9ce824ee67eaf80d16abe3f62c092b\"" Nov 6 00:27:02.042450 containerd[1693]: time="2025-11-06T00:27:02.042416896Z" level=info msg="connecting to shim af60eacf2795ac04ad305ec89a6541182b9ce824ee67eaf80d16abe3f62c092b" address="unix:///run/containerd/s/11ac65b31bccfc7ba8480b242ca49c27f437c9364b5a7e1d86a71cd6717e72a2" protocol=ttrpc version=3 Nov 6 00:27:02.061113 systemd[1]: Started cri-containerd-af60eacf2795ac04ad305ec89a6541182b9ce824ee67eaf80d16abe3f62c092b.scope - libcontainer container af60eacf2795ac04ad305ec89a6541182b9ce824ee67eaf80d16abe3f62c092b. Nov 6 00:27:02.087484 containerd[1693]: time="2025-11-06T00:27:02.087457295Z" level=info msg="StartContainer for \"af60eacf2795ac04ad305ec89a6541182b9ce824ee67eaf80d16abe3f62c092b\" returns successfully" Nov 6 00:27:02.091753 systemd[1]: cri-containerd-af60eacf2795ac04ad305ec89a6541182b9ce824ee67eaf80d16abe3f62c092b.scope: Deactivated successfully. Nov 6 00:27:02.094646 systemd[1]: Started sshd@24-10.200.8.40:22-10.200.16.10:55452.service - OpenSSH per-connection server daemon (10.200.16.10:55452). 
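The setters.go entry records the node condition flipping to Ready=False with reason KubeletNotReady because the CNI plugin (Cilium) has not finished initializing. A small client-go sketch that reads the same condition an operator would check while the agent pod is still starting; it assumes in-cluster credentials and uses the node name from the log:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the sketch runs inside the cluster
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ci-4459.1.0-n-288d7afe99", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// While the CNI is not initialized this prints Status=False, Reason=KubeletNotReady.
			fmt.Printf("Ready=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
		}
	}
}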
Nov 6 00:27:02.097060 containerd[1693]: time="2025-11-06T00:27:02.096575639Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af60eacf2795ac04ad305ec89a6541182b9ce824ee67eaf80d16abe3f62c092b\" id:\"af60eacf2795ac04ad305ec89a6541182b9ce824ee67eaf80d16abe3f62c092b\" pid:4993 exited_at:{seconds:1762388822 nanos:95428540}" Nov 6 00:27:02.097060 containerd[1693]: time="2025-11-06T00:27:02.096644167Z" level=info msg="received exit event container_id:\"af60eacf2795ac04ad305ec89a6541182b9ce824ee67eaf80d16abe3f62c092b\" id:\"af60eacf2795ac04ad305ec89a6541182b9ce824ee67eaf80d16abe3f62c092b\" pid:4993 exited_at:{seconds:1762388822 nanos:95428540}" Nov 6 00:27:02.117583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af60eacf2795ac04ad305ec89a6541182b9ce824ee67eaf80d16abe3f62c092b-rootfs.mount: Deactivated successfully. Nov 6 00:27:02.737319 sshd[5013]: Accepted publickey for core from 10.200.16.10 port 55452 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:27:02.738684 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:02.743390 systemd-logind[1676]: New session 27 of user core. Nov 6 00:27:02.754117 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 6 00:27:03.009456 containerd[1693]: time="2025-11-06T00:27:03.009353508Z" level=info msg="CreateContainer within sandbox \"ccccd455f91296a42cb974b142971454428d5dc80487367860bf1eee0c3bb1e8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 00:27:03.030869 containerd[1693]: time="2025-11-06T00:27:03.029003339Z" level=info msg="Container 953c09532200dd0197746f71678351e279ebfe13d3568095cc9557b14f88b62e: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:03.039471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121677606.mount: Deactivated successfully. Nov 6 00:27:03.054887 containerd[1693]: time="2025-11-06T00:27:03.054858821Z" level=info msg="CreateContainer within sandbox \"ccccd455f91296a42cb974b142971454428d5dc80487367860bf1eee0c3bb1e8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"953c09532200dd0197746f71678351e279ebfe13d3568095cc9557b14f88b62e\"" Nov 6 00:27:03.058269 containerd[1693]: time="2025-11-06T00:27:03.058096101Z" level=info msg="StartContainer for \"953c09532200dd0197746f71678351e279ebfe13d3568095cc9557b14f88b62e\"" Nov 6 00:27:03.059747 containerd[1693]: time="2025-11-06T00:27:03.059716911Z" level=info msg="connecting to shim 953c09532200dd0197746f71678351e279ebfe13d3568095cc9557b14f88b62e" address="unix:///run/containerd/s/11ac65b31bccfc7ba8480b242ca49c27f437c9364b5a7e1d86a71cd6717e72a2" protocol=ttrpc version=3 Nov 6 00:27:03.083139 systemd[1]: Started cri-containerd-953c09532200dd0197746f71678351e279ebfe13d3568095cc9557b14f88b62e.scope - libcontainer container 953c09532200dd0197746f71678351e279ebfe13d3568095cc9557b14f88b62e. Nov 6 00:27:03.144264 containerd[1693]: time="2025-11-06T00:27:03.144237299Z" level=info msg="StartContainer for \"953c09532200dd0197746f71678351e279ebfe13d3568095cc9557b14f88b62e\" returns successfully" Nov 6 00:27:03.146155 systemd[1]: cri-containerd-953c09532200dd0197746f71678351e279ebfe13d3568095cc9557b14f88b62e.scope: Deactivated successfully. 
Nov 6 00:27:03.147510 containerd[1693]: time="2025-11-06T00:27:03.147471686Z" level=info msg="received exit event container_id:\"953c09532200dd0197746f71678351e279ebfe13d3568095cc9557b14f88b62e\" id:\"953c09532200dd0197746f71678351e279ebfe13d3568095cc9557b14f88b62e\" pid:5046 exited_at:{seconds:1762388823 nanos:147128135}" Nov 6 00:27:03.147861 containerd[1693]: time="2025-11-06T00:27:03.147835741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"953c09532200dd0197746f71678351e279ebfe13d3568095cc9557b14f88b62e\" id:\"953c09532200dd0197746f71678351e279ebfe13d3568095cc9557b14f88b62e\" pid:5046 exited_at:{seconds:1762388823 nanos:147128135}" Nov 6 00:27:03.167218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-953c09532200dd0197746f71678351e279ebfe13d3568095cc9557b14f88b62e-rootfs.mount: Deactivated successfully. Nov 6 00:27:03.677735 kubelet[3152]: E1106 00:27:03.677694 3152 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 6 00:27:04.017416 containerd[1693]: time="2025-11-06T00:27:04.017273813Z" level=info msg="CreateContainer within sandbox \"ccccd455f91296a42cb974b142971454428d5dc80487367860bf1eee0c3bb1e8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 00:27:04.035497 containerd[1693]: time="2025-11-06T00:27:04.035469281Z" level=info msg="Container 4806e65759118700259b8955b313e86844d578c69689a84b4d48b8dab2748842: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:04.040895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359216889.mount: Deactivated successfully. Nov 6 00:27:04.054220 containerd[1693]: time="2025-11-06T00:27:04.054195048Z" level=info msg="CreateContainer within sandbox \"ccccd455f91296a42cb974b142971454428d5dc80487367860bf1eee0c3bb1e8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4806e65759118700259b8955b313e86844d578c69689a84b4d48b8dab2748842\"" Nov 6 00:27:04.054841 containerd[1693]: time="2025-11-06T00:27:04.054808446Z" level=info msg="StartContainer for \"4806e65759118700259b8955b313e86844d578c69689a84b4d48b8dab2748842\"" Nov 6 00:27:04.055897 containerd[1693]: time="2025-11-06T00:27:04.055842941Z" level=info msg="connecting to shim 4806e65759118700259b8955b313e86844d578c69689a84b4d48b8dab2748842" address="unix:///run/containerd/s/11ac65b31bccfc7ba8480b242ca49c27f437c9364b5a7e1d86a71cd6717e72a2" protocol=ttrpc version=3 Nov 6 00:27:04.077079 systemd[1]: Started cri-containerd-4806e65759118700259b8955b313e86844d578c69689a84b4d48b8dab2748842.scope - libcontainer container 4806e65759118700259b8955b313e86844d578c69689a84b4d48b8dab2748842. Nov 6 00:27:04.099663 systemd[1]: cri-containerd-4806e65759118700259b8955b313e86844d578c69689a84b4d48b8dab2748842.scope: Deactivated successfully. 
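Each Cilium init container so far (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state below) follows the same pattern: the cri-containerd scope starts, the task runs and exits, containerd emits a TaskExit event with the PID and exited_at timestamp, and the rootfs mount unit is deactivated. A sketch of observing such an exit directly with the containerd Go client; it assumes the classic github.com/containerd/containerd import path (newer releases moved the client to a /v2 module), the default socket, the k8s.io namespace that the CRI plugin uses, and a placeholder container ID:

package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	cont, err := client.LoadContainer(ctx, "<container-id>") // placeholder ID
	if err != nil {
		log.Fatal(err)
	}
	task, err := cont.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Wait returns a channel delivering the exit status, the same information
	// the journal shows as "TaskExit ... exited_at:{...}".
	statusC, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	st := <-statusC
	code, exitedAt, err := st.Result()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("task exited with code %d at %s", code, exitedAt)
}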
Nov 6 00:27:04.101496 containerd[1693]: time="2025-11-06T00:27:04.101471117Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4806e65759118700259b8955b313e86844d578c69689a84b4d48b8dab2748842\" id:\"4806e65759118700259b8955b313e86844d578c69689a84b4d48b8dab2748842\" pid:5088 exited_at:{seconds:1762388824 nanos:100570733}" Nov 6 00:27:04.102721 containerd[1693]: time="2025-11-06T00:27:04.102690849Z" level=info msg="received exit event container_id:\"4806e65759118700259b8955b313e86844d578c69689a84b4d48b8dab2748842\" id:\"4806e65759118700259b8955b313e86844d578c69689a84b4d48b8dab2748842\" pid:5088 exited_at:{seconds:1762388824 nanos:100570733}" Nov 6 00:27:04.104266 containerd[1693]: time="2025-11-06T00:27:04.104202196Z" level=info msg="StartContainer for \"4806e65759118700259b8955b313e86844d578c69689a84b4d48b8dab2748842\" returns successfully" Nov 6 00:27:04.122008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4806e65759118700259b8955b313e86844d578c69689a84b4d48b8dab2748842-rootfs.mount: Deactivated successfully. Nov 6 00:27:05.019555 containerd[1693]: time="2025-11-06T00:27:05.019516966Z" level=info msg="CreateContainer within sandbox \"ccccd455f91296a42cb974b142971454428d5dc80487367860bf1eee0c3bb1e8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 00:27:05.043845 containerd[1693]: time="2025-11-06T00:27:05.043812564Z" level=info msg="Container 162db090e46f170e868b0d33ced86af2c543f2052c87a913d5d602e8fc567d8e: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:05.057619 containerd[1693]: time="2025-11-06T00:27:05.057593443Z" level=info msg="CreateContainer within sandbox \"ccccd455f91296a42cb974b142971454428d5dc80487367860bf1eee0c3bb1e8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"162db090e46f170e868b0d33ced86af2c543f2052c87a913d5d602e8fc567d8e\"" Nov 6 00:27:05.058983 containerd[1693]: time="2025-11-06T00:27:05.058091627Z" level=info msg="StartContainer for \"162db090e46f170e868b0d33ced86af2c543f2052c87a913d5d602e8fc567d8e\"" Nov 6 00:27:05.058983 containerd[1693]: time="2025-11-06T00:27:05.058874923Z" level=info msg="connecting to shim 162db090e46f170e868b0d33ced86af2c543f2052c87a913d5d602e8fc567d8e" address="unix:///run/containerd/s/11ac65b31bccfc7ba8480b242ca49c27f437c9364b5a7e1d86a71cd6717e72a2" protocol=ttrpc version=3 Nov 6 00:27:05.079083 systemd[1]: Started cri-containerd-162db090e46f170e868b0d33ced86af2c543f2052c87a913d5d602e8fc567d8e.scope - libcontainer container 162db090e46f170e868b0d33ced86af2c543f2052c87a913d5d602e8fc567d8e. 
Nov 6 00:27:05.109786 containerd[1693]: time="2025-11-06T00:27:05.109727347Z" level=info msg="StartContainer for \"162db090e46f170e868b0d33ced86af2c543f2052c87a913d5d602e8fc567d8e\" returns successfully" Nov 6 00:27:05.168246 containerd[1693]: time="2025-11-06T00:27:05.168205143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"162db090e46f170e868b0d33ced86af2c543f2052c87a913d5d602e8fc567d8e\" id:\"ff820c0d8e353176fc5d645c03024018c2b8c17679c7b5011fbb2ff406f39a5e\" pid:5158 exited_at:{seconds:1762388825 nanos:167351717}" Nov 6 00:27:05.551996 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Nov 6 00:27:06.034671 kubelet[3152]: I1106 00:27:06.034615 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cp7wz" podStartSLOduration=6.034598591 podStartE2EDuration="6.034598591s" podCreationTimestamp="2025-11-06 00:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:27:06.033539644 +0000 UTC m=+187.540107448" watchObservedRunningTime="2025-11-06 00:27:06.034598591 +0000 UTC m=+187.541166395" Nov 6 00:27:07.451743 containerd[1693]: time="2025-11-06T00:27:07.451437236Z" level=info msg="TaskExit event in podsandbox handler container_id:\"162db090e46f170e868b0d33ced86af2c543f2052c87a913d5d602e8fc567d8e\" id:\"4c6488b5509577483df214f88f5abd58d518f2df40cc10d3d9e196b979bf81f1\" pid:5353 exit_status:1 exited_at:{seconds:1762388827 nanos:450885758}" Nov 6 00:27:08.215916 systemd-networkd[1447]: lxc_health: Link UP Nov 6 00:27:08.224076 systemd-networkd[1447]: lxc_health: Gained carrier Nov 6 00:27:09.593884 containerd[1693]: time="2025-11-06T00:27:09.593559866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"162db090e46f170e868b0d33ced86af2c543f2052c87a913d5d602e8fc567d8e\" id:\"31b2f7812d4afb6c913637d236ca8cb961df018c73a404e03894785d1e941871\" pid:5721 exited_at:{seconds:1762388829 nanos:592906012}" Nov 6 00:27:09.597369 kubelet[3152]: E1106 00:27:09.597297 3152 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43604->127.0.0.1:42079: write tcp 127.0.0.1:43604->127.0.0.1:42079: write: broken pipe Nov 6 00:27:09.813091 systemd-networkd[1447]: lxc_health: Gained IPv6LL Nov 6 00:27:11.693274 containerd[1693]: time="2025-11-06T00:27:11.693226010Z" level=info msg="TaskExit event in podsandbox handler container_id:\"162db090e46f170e868b0d33ced86af2c543f2052c87a913d5d602e8fc567d8e\" id:\"4dcce69153a524f854286a1ebc0077d6b51a271692e322b94d14e0ed6245717a\" pid:5754 exited_at:{seconds:1762388831 nanos:692155902}" Nov 6 00:27:13.771628 containerd[1693]: time="2025-11-06T00:27:13.771585442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"162db090e46f170e868b0d33ced86af2c543f2052c87a913d5d602e8fc567d8e\" id:\"af5e4b82774a7aeeac5757b71cff0e1999c63314ee7d8f382ccad5b9f4e363cb\" pid:5780 exited_at:{seconds:1762388833 nanos:771091022}" Nov 6 00:27:13.892681 sshd[5028]: Connection closed by 10.200.16.10 port 55452 Nov 6 00:27:13.893358 sshd-session[5013]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:13.896966 systemd[1]: sshd@24-10.200.8.40:22-10.200.16.10:55452.service: Deactivated successfully. Nov 6 00:27:13.898848 systemd[1]: session-27.scope: Deactivated successfully. Nov 6 00:27:13.899890 systemd-logind[1676]: Session 27 logged out. Waiting for processes to exit. 
Nov 6 00:27:13.901397 systemd-logind[1676]: Removed session 27.