Dec 16 13:04:54.973213 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:04:54.973238 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:04:54.973250 kernel: BIOS-provided physical RAM map:
Dec 16 13:04:54.973454 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:04:54.973462 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 16 13:04:54.973468 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Dec 16 13:04:54.973475 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Dec 16 13:04:54.973481 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Dec 16 13:04:54.973486 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Dec 16 13:04:54.973494 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 16 13:04:54.973500 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 16 13:04:54.973507 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 16 13:04:54.973513 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 16 13:04:54.973519 kernel: printk: legacy bootconsole [earlyser0] enabled
Dec 16 13:04:54.973527 kernel: NX (Execute Disable) protection: active
Dec 16 13:04:54.973535 kernel: APIC: Static calls initialized
Dec 16 13:04:54.973541 kernel: efi: EFI v2.7 by Microsoft
Dec 16 13:04:54.973548 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eac2298 RNG=0x3ffd2018
Dec 16 13:04:54.973554 kernel: random: crng init done
Dec 16 13:04:54.973560 kernel: secureboot: Secure boot disabled
Dec 16 13:04:54.973566 kernel: SMBIOS 3.1.0 present.
Dec 16 13:04:54.973572 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Dec 16 13:04:54.973579 kernel: DMI: Memory slots populated: 2/2
Dec 16 13:04:54.973584 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 16 13:04:54.973591 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Dec 16 13:04:54.973597 kernel: Hyper-V: Nested features: 0x3e0101
Dec 16 13:04:54.973605 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 16 13:04:54.973612 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 16 13:04:54.973619 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 16 13:04:54.973625 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 16 13:04:54.973632 kernel: tsc: Detected 2300.001 MHz processor
Dec 16 13:04:54.973639 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:04:54.973646 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:04:54.973652 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Dec 16 13:04:54.973659 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 13:04:54.973666 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:04:54.973674 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Dec 16 13:04:54.973680 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Dec 16 13:04:54.973686 kernel: Using GB pages for direct mapping
Dec 16 13:04:54.973694 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:04:54.973704 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 16 13:04:54.973711 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:54.973719 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:54.973726 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 16 13:04:54.973732 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 16 13:04:54.973739 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:54.973746 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:54.973753 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:54.973760 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Dec 16 13:04:54.973769 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Dec 16 13:04:54.973776 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:54.973783 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 16 13:04:54.973790 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Dec 16 13:04:54.973796 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 16 13:04:54.973803 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 16 13:04:54.973810 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 16 13:04:54.973816 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 16 13:04:54.973857 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 16 13:04:54.973867 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Dec 16 13:04:54.973873 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 16 13:04:54.973880 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Dec 16 13:04:54.973887 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Dec 16 13:04:54.973894 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Dec 16 13:04:54.973901 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Dec 16 13:04:54.973909 kernel: Zone ranges:
Dec 16 13:04:54.973916 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:04:54.973923 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 16 13:04:54.973932 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 16 13:04:54.973938 kernel: Device empty
Dec 16 13:04:54.973945 kernel: Movable zone start for each node
Dec 16 13:04:54.973952 kernel: Early memory node ranges
Dec 16 13:04:54.973958 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 16 13:04:54.973965 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Dec 16 13:04:54.973971 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Dec 16 13:04:54.973977 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 16 13:04:54.973984 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 16 13:04:54.973992 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 16 13:04:54.973999 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:04:54.974005 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 16 13:04:54.974013 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 16 13:04:54.974020 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Dec 16 13:04:54.974027 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 16 13:04:54.974034 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 16 13:04:54.974041 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:04:54.974048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:04:54.974056 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:04:54.974062 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 16 13:04:54.974069 kernel: TSC deadline timer available
Dec 16 13:04:54.974076 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:04:54.974083 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:04:54.974090 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:04:54.974097 kernel: CPU topo: Max. threads per core: 2
Dec 16 13:04:54.974104 kernel: CPU topo: Num. cores per package: 1
Dec 16 13:04:54.974112 kernel: CPU topo: Num. threads per package: 2
Dec 16 13:04:54.974118 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 13:04:54.974126 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 16 13:04:54.974132 kernel: Booting paravirtualized kernel on Hyper-V
Dec 16 13:04:54.974139 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:04:54.974146 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 13:04:54.974154 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 13:04:54.974161 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 13:04:54.974168 kernel: pcpu-alloc: [0] 0 1
Dec 16 13:04:54.974175 kernel: Hyper-V: PV spinlocks enabled
Dec 16 13:04:54.974183 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:04:54.974191 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:04:54.974198 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 16 13:04:54.974205 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 13:04:54.974212 kernel: Fallback order for Node 0: 0
Dec 16 13:04:54.974219 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Dec 16 13:04:54.974226 kernel: Policy zone: Normal
Dec 16 13:04:54.974234 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:04:54.974241 kernel: software IO TLB: area num 2.
Dec 16 13:04:54.974249 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 13:04:54.974255 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:04:54.974262 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:04:54.974268 kernel: Dynamic Preempt: voluntary
Dec 16 13:04:54.974275 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:04:54.974284 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:04:54.974297 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 13:04:54.974306 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:04:54.974314 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:04:54.974321 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:04:54.974328 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:04:54.974337 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 13:04:54.974345 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:04:54.974353 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:04:54.974361 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:04:54.974369 kernel: Using NULL legacy PIC
Dec 16 13:04:54.974378 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 16 13:04:54.974386 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:04:54.974393 kernel: Console: colour dummy device 80x25
Dec 16 13:04:54.974400 kernel: printk: legacy console [tty1] enabled
Dec 16 13:04:54.974408 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:04:54.974416 kernel: printk: legacy bootconsole [earlyser0] disabled
Dec 16 13:04:54.974423 kernel: ACPI: Core revision 20240827
Dec 16 13:04:54.974431 kernel: Failed to register legacy timer interrupt
Dec 16 13:04:54.974439 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:04:54.974447 kernel: x2apic enabled
Dec 16 13:04:54.974454 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:04:54.974461 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Dec 16 13:04:54.974469 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 16 13:04:54.974476 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Dec 16 13:04:54.974484 kernel: Hyper-V: Using IPI hypercalls
Dec 16 13:04:54.974492 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Dec 16 13:04:54.974500 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Dec 16 13:04:54.974508 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Dec 16 13:04:54.974516 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Dec 16 13:04:54.974523 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Dec 16 13:04:54.974531 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Dec 16 13:04:54.974538 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735f0517, max_idle_ns: 440795237604 ns
Dec 16 13:04:54.974546 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300001)
Dec 16 13:04:54.974554 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 13:04:54.974562 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 16 13:04:54.974570 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 16 13:04:54.974577 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:04:54.974586 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:04:54.974593 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:04:54.974600 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 16 13:04:54.974608 kernel: RETBleed: Vulnerable
Dec 16 13:04:54.974615 kernel: Speculative Store Bypass: Vulnerable
Dec 16 13:04:54.974623 kernel: active return thunk: its_return_thunk
Dec 16 13:04:54.974630 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 13:04:54.974638 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:04:54.974645 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:04:54.974652 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:04:54.974659 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 16 13:04:54.974667 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 16 13:04:54.974675 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 16 13:04:54.974683 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Dec 16 13:04:54.974690 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Dec 16 13:04:54.974698 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Dec 16 13:04:54.974705 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:04:54.974712 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 16 13:04:54.974719 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 16 13:04:54.974726 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 16 13:04:54.974733 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Dec 16 13:04:54.974741 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Dec 16 13:04:54.974749 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Dec 16 13:04:54.974757 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Dec 16 13:04:54.974765 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:04:54.974772 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:04:54.974779 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:04:54.974786 kernel: landlock: Up and running.
Dec 16 13:04:54.974793 kernel: SELinux: Initializing.
Dec 16 13:04:54.974800 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:04:54.974807 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:04:54.974815 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Dec 16 13:04:54.974836 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Dec 16 13:04:54.974843 kernel: signal: max sigframe size: 11952
Dec 16 13:04:54.974851 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:04:54.974859 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:04:54.974866 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:04:54.974874 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 13:04:54.974882 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:04:54.974889 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:04:54.974897 kernel: .... node #0, CPUs: #1
Dec 16 13:04:54.974905 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 13:04:54.974913 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Dec 16 13:04:54.974924 kernel: Memory: 8068832K/8383228K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 308180K reserved, 0K cma-reserved)
Dec 16 13:04:54.974933 kernel: devtmpfs: initialized
Dec 16 13:04:54.974942 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:04:54.974949 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 16 13:04:54.974957 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:04:54.974965 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 13:04:54.974974 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:04:54.974982 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:04:54.974991 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:04:54.975001 kernel: audit: type=2000 audit(1765890291.080:1): state=initialized audit_enabled=0 res=1
Dec 16 13:04:54.975009 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:04:54.975018 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:04:54.975026 kernel: cpuidle: using governor menu
Dec 16 13:04:54.975034 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:04:54.975041 kernel: dca service started, version 1.12.1
Dec 16 13:04:54.975049 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Dec 16 13:04:54.975057 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Dec 16 13:04:54.975066 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:04:54.975075 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:04:54.975083 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:04:54.975092 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:04:54.975100 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:04:54.975108 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:04:54.975116 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:04:54.975123 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:04:54.975131 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 13:04:54.975141 kernel: ACPI: Interpreter enabled
Dec 16 13:04:54.975150 kernel: ACPI: PM: (supports S0 S5)
Dec 16 13:04:54.975158 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:04:54.975167 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:04:54.975175 kernel: PCI: Ignoring E820 reservations for host bridge windows
Dec 16 13:04:54.975183 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 16 13:04:54.975191 kernel: iommu: Default domain type: Translated
Dec 16 13:04:54.975198 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:04:54.975205 kernel: efivars: Registered efivars operations
Dec 16 13:04:54.975211 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:04:54.975216 kernel: PCI: System does not support PCI
Dec 16 13:04:54.975221 kernel: vgaarb: loaded
Dec 16 13:04:54.975226 kernel: clocksource: Switched to clocksource tsc-early
Dec 16 13:04:54.975231 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:04:54.975236 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:04:54.975241 kernel: pnp: PnP ACPI init
Dec 16 13:04:54.975246 kernel: pnp: PnP ACPI: found 3 devices
Dec 16 13:04:54.975252 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:04:54.975261 kernel: NET: Registered PF_INET protocol family
Dec 16 13:04:54.975271 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 13:04:54.975278 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 16 13:04:54.975283 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:04:54.975289 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 13:04:54.975294 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 16 13:04:54.975302 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 16 13:04:54.975313 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 16 13:04:54.975321 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 16 13:04:54.975329 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:04:54.975334 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:04:54.975340 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:04:54.975345 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 16 13:04:54.975355 kernel: software IO TLB: mapped [mem 0x000000003a9a9000-0x000000003e9a9000] (64MB)
Dec 16 13:04:54.975364 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Dec 16 13:04:54.975373 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Dec 16 13:04:54.975381 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735f0517, max_idle_ns: 440795237604 ns
Dec 16 13:04:54.975389 kernel: clocksource: Switched to clocksource tsc
Dec 16 13:04:54.975395 kernel: Initialise system trusted keyrings
Dec 16 13:04:54.975400 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 16 13:04:54.975407 kernel: Key type asymmetric registered
Dec 16 13:04:54.975415 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:04:54.975423 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:04:54.975432 kernel: io scheduler mq-deadline registered
Dec 16 13:04:54.975439 kernel: io scheduler kyber registered
Dec 16 13:04:54.975445 kernel: io scheduler bfq registered
Dec 16 13:04:54.975451 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:04:54.975457 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:04:54.975465 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:04:54.975475 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 16 13:04:54.975482 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:04:54.975488 kernel: i8042: PNP: No PS/2 controller found.
Dec 16 13:04:54.975607 kernel: rtc_cmos 00:02: registered as rtc0
Dec 16 13:04:54.975678 kernel: rtc_cmos 00:02: setting system clock to 2025-12-16T13:04:54 UTC (1765890294)
Dec 16 13:04:54.975742 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 16 13:04:54.975750 kernel: intel_pstate: Intel P-state driver initializing
Dec 16 13:04:54.975756 kernel: efifb: probing for efifb
Dec 16 13:04:54.975764 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 16 13:04:54.975773 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 16 13:04:54.975781 kernel: efifb: scrolling: redraw
Dec 16 13:04:54.975790 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 13:04:54.975796 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 13:04:54.975801 kernel: fb0: EFI VGA frame buffer device
Dec 16 13:04:54.975806 kernel: pstore: Using crash dump compression: deflate
Dec 16 13:04:54.975815 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 13:04:54.975836 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:04:54.975844 kernel: Segment Routing with IPv6
Dec 16 13:04:54.975849 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:04:54.975854 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:04:54.975860 kernel: Key type dns_resolver registered
Dec 16 13:04:54.975868 kernel: IPI shorthand broadcast: enabled
Dec 16 13:04:54.975877 kernel: sched_clock: Marking stable (3047004430, 97844799)->(3498176493, -353327264)
Dec 16 13:04:54.975885 kernel: registered taskstats version 1
Dec 16 13:04:54.975892 kernel: Loading compiled-in X.509 certificates
Dec 16 13:04:54.975897 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:04:54.975903 kernel: Demotion targets for Node 0: null
Dec 16 13:04:54.975912 kernel: Key type .fscrypt registered
Dec 16 13:04:54.975920 kernel: Key type fscrypt-provisioning registered
Dec 16 13:04:54.975927 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 13:04:54.975932 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:04:54.975937 kernel: ima: No architecture policies found
Dec 16 13:04:54.975942 kernel: clk: Disabling unused clocks
Dec 16 13:04:54.975953 kernel: Warning: unable to open an initial console.
Dec 16 13:04:54.975962 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 16 13:04:54.975971 kernel: Write protecting the kernel read-only data: 40960k
Dec 16 13:04:54.975977 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 16 13:04:54.975982 kernel: Run /init as init process
Dec 16 13:04:54.975987 kernel: with arguments:
Dec 16 13:04:54.975993 kernel: /init
Dec 16 13:04:54.976003 kernel: with environment:
Dec 16 13:04:54.976011 kernel: HOME=/
Dec 16 13:04:54.976020 kernel: TERM=linux
Dec 16 13:04:54.976026 systemd[1]: Successfully made /usr/ read-only.
Dec 16 13:04:54.976034 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:04:54.976044 systemd[1]: Detected virtualization microsoft.
Dec 16 13:04:54.976053 systemd[1]: Detected architecture x86-64.
Dec 16 13:04:54.976062 systemd[1]: Running in initrd.
Dec 16 13:04:54.976068 systemd[1]: No hostname configured, using default hostname.
Dec 16 13:04:54.976075 systemd[1]: Hostname set to .
Dec 16 13:04:54.976080 systemd[1]: Initializing machine ID from random generator.
Dec 16 13:04:54.976090 systemd[1]: Queued start job for default target initrd.target.
Dec 16 13:04:54.976098 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:04:54.976107 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:04:54.976113 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 13:04:54.976119 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:04:54.976125 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 13:04:54.976137 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 13:04:54.976148 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 13:04:54.976157 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 13:04:54.976163 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:04:54.976169 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:04:54.976175 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:04:54.976183 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:04:54.976196 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:04:54.976205 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:04:54.976213 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:04:54.976219 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:04:54.976225 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 13:04:54.976230 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 13:04:54.976238 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:04:54.976248 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:04:54.976257 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:04:54.976269 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:04:54.976279 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 13:04:54.976286 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:04:54.976291 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 13:04:54.976297 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 13:04:54.976304 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 13:04:54.976313 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:04:54.976322 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:04:54.976336 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:04:54.976343 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 13:04:54.976369 systemd-journald[186]: Collecting audit messages is disabled.
Dec 16 13:04:54.976385 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:04:54.976392 systemd-journald[186]: Journal started
Dec 16 13:04:54.976409 systemd-journald[186]: Runtime Journal (/run/log/journal/413cfcf57af347a9b4233dff5ed52df2) is 8M, max 158.6M, 150.6M free.
Dec 16 13:04:54.977885 systemd-modules-load[187]: Inserted module 'overlay'
Dec 16 13:04:54.987870 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:04:54.988552 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 13:04:54.991202 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:04:54.993173 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:04:55.002998 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:04:55.008382 systemd-tmpfiles[200]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 13:04:55.012170 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 13:04:55.021078 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:04:55.036927 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 13:04:55.023575 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:04:55.039332 systemd-modules-load[187]: Inserted module 'br_netfilter' Dec 16 13:04:55.041897 kernel: Bridge firewalling registered Dec 16 13:04:55.043621 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:04:55.048723 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:04:55.052970 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:04:55.062981 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:04:55.065925 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:04:55.072050 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:04:55.076946 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 13:04:55.082099 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:04:55.096753 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:04:55.126094 systemd-resolved[220]: Positive Trust Anchors:
Dec 16 13:04:55.126107 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:04:55.126144 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:04:55.143993 systemd-resolved[220]: Defaulting to hostname 'linux'. Dec 16 13:04:55.146563 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:04:55.152904 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:04:55.171839 kernel: SCSI subsystem initialized Dec 16 13:04:55.179841 kernel: Loading iSCSI transport class v2.0-870. Dec 16 13:04:55.188841 kernel: iscsi: registered transport (tcp) Dec 16 13:04:55.206851 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:04:55.206888 kernel: QLogic iSCSI HBA Driver Dec 16 13:04:55.220695 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:04:55.238507 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:04:55.243339 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:04:55.275188 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:04:55.278942 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 13:04:55.329842 kernel: raid6: avx512x4 gen() 44375 MB/s Dec 16 13:04:55.346835 kernel: raid6: avx512x2 gen() 43193 MB/s Dec 16 13:04:55.363834 kernel: raid6: avx512x1 gen() 26356 MB/s Dec 16 13:04:55.381835 kernel: raid6: avx2x4 gen() 35736 MB/s Dec 16 13:04:55.399834 kernel: raid6: avx2x2 gen() 37264 MB/s Dec 16 13:04:55.417958 kernel: raid6: avx2x1 gen() 31020 MB/s Dec 16 13:04:55.417972 kernel: raid6: using algorithm avx512x4 gen() 44375 MB/s Dec 16 13:04:55.437903 kernel: raid6: .... xor() 7297 MB/s, rmw enabled Dec 16 13:04:55.437918 kernel: raid6: using avx512x2 recovery algorithm Dec 16 13:04:55.457848 kernel: xor: automatically using best checksumming function avx Dec 16 13:04:55.577848 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:04:55.582462 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:04:55.585970 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:04:55.601975 systemd-udevd[435]: Using default interface naming scheme 'v255'. Dec 16 13:04:55.605885 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:04:55.610820 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:04:55.633266 dracut-pre-trigger[443]: rd.md=0: removing MD RAID activation Dec 16 13:04:55.651394 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:04:55.656704 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:04:55.687203 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:04:55.694510 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Dec 16 13:04:55.735850 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:04:55.762839 kernel: AES CTR mode by8 optimization enabled Dec 16 13:04:55.765618 kernel: hv_vmbus: Vmbus version:5.3 Dec 16 13:04:55.770843 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 16 13:04:55.770877 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 16 13:04:55.770321 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:04:55.778864 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 16 13:04:55.770481 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:55.781105 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:04:55.787847 kernel: PTP clock support registered Dec 16 13:04:55.788112 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:04:55.799995 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:04:55.806079 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:55.814928 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 16 13:04:55.813934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:04:55.819796 kernel: hv_utils: Registering HyperV Utility Driver Dec 16 13:04:55.820025 kernel: hv_vmbus: registering driver hv_utils Dec 16 13:04:55.820091 kernel: hv_vmbus: registering driver hv_netvsc Dec 16 13:04:55.825097 kernel: hv_utils: Shutdown IC version 3.2 Dec 16 13:04:55.828655 kernel: hv_utils: Heartbeat IC version 3.0 Dec 16 13:04:55.828687 kernel: hv_utils: TimeSync IC version 4.0 Dec 16 13:04:55.599881 systemd-resolved[220]: Clock change detected. Flushing caches. Dec 16 13:04:55.607458 systemd-journald[186]: Time jumped backwards, rotating. 
Dec 16 13:04:55.607506 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 13:04:55.612247 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd8ca0f7 (unnamed net_device) (uninitialized): VF slot 1 added Dec 16 13:04:55.615200 kernel: hv_vmbus: registering driver hid_hyperv Dec 16 13:04:55.635441 kernel: hv_vmbus: registering driver hv_pci Dec 16 13:04:55.645995 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 16 13:04:55.653145 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 16 13:04:55.653597 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:55.659085 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Dec 16 13:04:55.665507 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Dec 16 13:04:55.665707 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Dec 16 13:04:55.667312 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 13:04:55.668703 kernel: hv_vmbus: registering driver hv_storvsc Dec 16 13:04:55.674204 kernel: scsi host0: storvsc_host_t Dec 16 13:04:55.674342 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Dec 16 13:04:55.674362 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 16 13:04:55.677205 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Dec 16 13:04:55.687660 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 16 13:04:55.687810 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 13:04:55.689203 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 16 13:04:55.694876 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 13:04:55.695002 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Dec 16 13:04:55.705203 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#210 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 13:04:55.714072 kernel: nvme nvme0: pci function c05b:00:00.0 Dec 16 13:04:55.714286 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Dec 16 13:04:55.728205 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#18 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 13:04:55.881289 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 16 13:04:55.888203 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:04:56.203216 kernel: nvme nvme0: using unchecked data buffer Dec 16 13:04:56.420093 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Dec 16 13:04:56.429447 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Dec 16 13:04:56.478250 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Dec 16 13:04:56.483417 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 13:04:56.491830 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Dec 16 13:04:56.492401 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Dec 16 13:04:56.501349 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:04:56.504878 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:04:56.508227 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:04:56.511679 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:04:56.526361 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:04:56.547896 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:04:56.547631 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:04:56.644214 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Dec 16 13:04:56.651212 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Dec 16 13:04:56.656201 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Dec 16 13:04:56.661202 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 13:04:56.690890 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Dec 16 13:04:56.690948 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Dec 16 13:04:56.690966 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Dec 16 13:04:56.690981 kernel: pci 7870:00:00.0: enabling Extended Tags Dec 16 13:04:56.709624 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 13:04:56.709776 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Dec 16 13:04:56.709919 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Dec 16 13:04:56.713888 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Dec 16 13:04:56.725200 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Dec 16 13:04:56.728520 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd8ca0f7 eth0: VF registering: eth1 Dec 16 13:04:56.728882 kernel: mana 7870:00:00.0 eth1: joined to eth0 Dec 16 13:04:56.732213 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Dec 16 13:04:57.563102 disk-uuid[650]: The operation has completed successfully. Dec 16 13:04:57.566362 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:04:57.626434 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:04:57.626524 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:04:57.657993 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Dec 16 13:04:57.675357 sh[693]: Success Dec 16 13:04:57.706543 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 13:04:57.706589 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:04:57.708196 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:04:57.717280 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 16 13:04:58.018486 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:04:58.025274 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 13:04:58.042103 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 13:04:58.055233 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (706) Dec 16 13:04:58.058452 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 13:04:58.058482 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:04:58.379441 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 13:04:58.379608 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:04:58.380584 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:04:58.414081 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 13:04:58.417661 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:04:58.420381 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:04:58.421032 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 13:04:58.438418 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 16 13:04:58.457434 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (729) Dec 16 13:04:58.461712 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:04:58.461753 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:04:58.483599 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:04:58.483633 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 13:04:58.483643 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:04:58.491233 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:04:58.491241 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 13:04:58.494379 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 13:04:58.529934 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:04:58.534363 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:04:58.566531 systemd-networkd[875]: lo: Link UP Dec 16 13:04:58.566540 systemd-networkd[875]: lo: Gained carrier Dec 16 13:04:58.568803 systemd-networkd[875]: Enumeration completed Dec 16 13:04:58.569165 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:04:58.580240 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Dec 16 13:04:58.569168 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:04:58.569251 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:04:58.571358 systemd[1]: Reached target network.target - Network. 
Dec 16 13:04:58.598210 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 16 13:04:58.603464 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd8ca0f7 eth0: Data path switched to VF: enP30832s1 Dec 16 13:04:58.603206 systemd-networkd[875]: enP30832s1: Link UP Dec 16 13:04:58.603277 systemd-networkd[875]: eth0: Link UP Dec 16 13:04:58.603378 systemd-networkd[875]: eth0: Gained carrier Dec 16 13:04:58.603389 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:04:58.610811 systemd-networkd[875]: enP30832s1: Gained carrier Dec 16 13:04:58.624216 systemd-networkd[875]: eth0: DHCPv4 address 10.200.0.34/24, gateway 10.200.0.1 acquired from 168.63.129.16 Dec 16 13:04:59.603697 ignition[812]: Ignition 2.22.0 Dec 16 13:04:59.603709 ignition[812]: Stage: fetch-offline Dec 16 13:04:59.606480 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:04:59.603814 ignition[812]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:04:59.611932 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 13:04:59.603821 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:04:59.603907 ignition[812]: parsed url from cmdline: "" Dec 16 13:04:59.603910 ignition[812]: no config URL provided Dec 16 13:04:59.603915 ignition[812]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:04:59.603920 ignition[812]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:04:59.603924 ignition[812]: failed to fetch config: resource requires networking Dec 16 13:04:59.605122 ignition[812]: Ignition finished successfully Dec 16 13:04:59.640144 ignition[884]: Ignition 2.22.0 Dec 16 13:04:59.640155 ignition[884]: Stage: fetch Dec 16 13:04:59.640392 ignition[884]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:04:59.640400 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:04:59.640476 ignition[884]: parsed url from cmdline: "" Dec 16 13:04:59.640480 ignition[884]: no config URL provided Dec 16 13:04:59.640490 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:04:59.640496 ignition[884]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:04:59.640518 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 16 13:04:59.712083 ignition[884]: GET result: OK Dec 16 13:04:59.712171 ignition[884]: config has been read from IMDS userdata Dec 16 13:04:59.712208 ignition[884]: parsing config with SHA512: 24596647c6cb18f63458fe7f9364474b5b4829a04d8cbff868dedd222a3dbd030c0b8b2ef442151cc227cbdd7246654b8dbe83e10e2e765ea06623e62d71a090 Dec 16 13:04:59.717132 unknown[884]: fetched base config from "system" Dec 16 13:04:59.719282 unknown[884]: fetched base config from "system" Dec 16 13:04:59.719308 unknown[884]: fetched user config from "azure" Dec 16 13:04:59.723442 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Dec 16 13:04:59.721141 ignition[884]: fetch: fetch complete Dec 16 13:04:59.726381 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 13:04:59.721146 ignition[884]: fetch: fetch passed Dec 16 13:04:59.721223 ignition[884]: Ignition finished successfully Dec 16 13:04:59.751876 ignition[890]: Ignition 2.22.0 Dec 16 13:04:59.751887 ignition[890]: Stage: kargs Dec 16 13:04:59.752424 ignition[890]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:04:59.752438 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:04:59.756092 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 13:04:59.753224 ignition[890]: kargs: kargs passed Dec 16 13:04:59.759240 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 13:04:59.753275 ignition[890]: Ignition finished successfully Dec 16 13:04:59.783167 ignition[897]: Ignition 2.22.0 Dec 16 13:04:59.783177 ignition[897]: Stage: disks Dec 16 13:04:59.783421 ignition[897]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:04:59.783429 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:04:59.784328 ignition[897]: disks: disks passed Dec 16 13:04:59.784361 ignition[897]: Ignition finished successfully Dec 16 13:04:59.789260 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 13:04:59.790953 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:04:59.800257 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:04:59.804043 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:04:59.807241 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:04:59.808707 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:04:59.809541 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 16 13:04:59.925018 systemd-fsck[905]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Dec 16 13:04:59.928974 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:04:59.934619 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 13:05:00.221205 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 16 13:05:00.221464 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:05:00.225659 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:05:00.245575 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:05:00.251267 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:05:00.259323 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 13:05:00.267297 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:05:00.274090 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (914) Dec 16 13:05:00.274113 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:05:00.267856 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:05:00.276822 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:05:00.281091 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 13:05:00.283593 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 13:05:00.289959 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:05:00.289992 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 13:05:00.291533 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:05:00.292632 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:05:00.364471 systemd-networkd[875]: eth0: Gained IPv6LL Dec 16 13:05:00.931870 coreos-metadata[916]: Dec 16 13:05:00.931 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 13:05:00.934309 coreos-metadata[916]: Dec 16 13:05:00.933 INFO Fetch successful Dec 16 13:05:00.934309 coreos-metadata[916]: Dec 16 13:05:00.933 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 13:05:00.942181 coreos-metadata[916]: Dec 16 13:05:00.942 INFO Fetch successful Dec 16 13:05:00.958194 coreos-metadata[916]: Dec 16 13:05:00.958 INFO wrote hostname ci-4459.2.2-a-d193ee2782 to /sysroot/etc/hostname Dec 16 13:05:00.961496 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 13:05:01.128329 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 13:05:01.164683 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory Dec 16 13:05:01.184726 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 13:05:01.189312 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 13:05:02.158300 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 13:05:02.164635 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:05:02.179385 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 13:05:02.188037 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Dec 16 13:05:02.192962 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:05:02.217787 ignition[1033]: INFO : Ignition 2.22.0 Dec 16 13:05:02.217787 ignition[1033]: INFO : Stage: mount Dec 16 13:05:02.217787 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:05:02.217787 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:05:02.220239 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 13:05:02.236503 ignition[1033]: INFO : mount: mount passed Dec 16 13:05:02.236503 ignition[1033]: INFO : Ignition finished successfully Dec 16 13:05:02.221904 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 13:05:02.225275 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 13:05:02.244204 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:05:02.259207 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (1045) Dec 16 13:05:02.262209 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:05:02.262300 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:05:02.267029 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:05:02.267062 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 13:05:02.268440 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:05:02.270283 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 13:05:02.294204 ignition[1061]: INFO : Ignition 2.22.0 Dec 16 13:05:02.294204 ignition[1061]: INFO : Stage: files Dec 16 13:05:02.296558 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:05:02.296558 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:05:02.296558 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping Dec 16 13:05:02.304236 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 13:05:02.304236 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 13:05:02.366702 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 13:05:02.368968 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 13:05:02.368968 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 13:05:02.367039 unknown[1061]: wrote ssh authorized keys file for user: core Dec 16 13:05:02.384592 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 13:05:02.388265 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 16 13:05:02.434620 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 13:05:02.742820 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 13:05:02.742820 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 16 13:05:02.748694 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 16 13:05:03.009749 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 16 13:05:03.060272 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 16 13:05:03.064312 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 16 13:05:03.064312 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 13:05:03.064312 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:05:03.064312 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:05:03.064312 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:05:03.064312 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:05:03.064312 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:05:03.064312 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:05:03.091232 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:05:03.091232 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:05:03.091232 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:05:03.091232 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:05:03.091232 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:05:03.091232 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 16 13:05:03.580875 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 16 13:05:07.202474 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:05:07.202474 ignition[1061]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 16 13:05:07.263132 ignition[1061]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:05:07.277508 ignition[1061]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:05:07.277508 ignition[1061]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 16 13:05:07.277508 ignition[1061]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 16 13:05:07.283231 ignition[1061]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 13:05:07.283231 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:05:07.290271 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:05:07.290271 ignition[1061]: INFO : files: files passed Dec 16 13:05:07.290271 ignition[1061]: INFO : Ignition finished successfully Dec 16 13:05:07.288079 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 13:05:07.294526 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 13:05:07.305330 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 13:05:07.310039 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 13:05:07.315299 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 13:05:07.323404 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:05:07.326016 initrd-setup-root-after-ignition[1092]: grep: Dec 16 13:05:07.326016 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:05:07.334070 initrd-setup-root-after-ignition[1092]: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:05:07.327124 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:05:07.332547 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 13:05:07.335850 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 13:05:07.381427 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 13:05:07.381510 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 13:05:07.384917 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 13:05:07.389255 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:05:07.392134 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:05:07.392796 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:05:07.416237 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:05:07.419315 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:05:07.449963 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:05:07.450437 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:05:07.450665 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:05:07.454825 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:05:07.454962 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:05:07.461300 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:05:07.463825 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:05:07.467050 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:05:07.468782 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:05:07.469345 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:05:07.469645 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:05:07.469983 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:05:07.476784 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:05:07.487164 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:05:07.495331 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:05:07.496082 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:05:07.496421 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:05:07.496533 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:05:07.504887 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:05:07.507560 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:05:07.516280 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:05:07.517032 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:05:07.520314 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:05:07.520437 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:05:07.525595 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:05:07.525698 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:05:07.528607 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:05:07.528723 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:05:07.531591 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 16 13:05:07.531696 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 13:05:07.537009 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:05:07.554365 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:05:07.558727 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:05:07.558931 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:05:07.566028 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:05:07.566118 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:05:07.574706 ignition[1116]: INFO : Ignition 2.22.0
Dec 16 13:05:07.574706 ignition[1116]: INFO : Stage: umount
Dec 16 13:05:07.582672 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:05:07.582672 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:05:07.582672 ignition[1116]: INFO : umount: umount passed
Dec 16 13:05:07.582672 ignition[1116]: INFO : Ignition finished successfully
Dec 16 13:05:07.576305 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:05:07.577083 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:05:07.578687 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:05:07.578769 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:05:07.581091 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:05:07.581166 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:05:07.581439 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:05:07.581473 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:05:07.581727 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:05:07.581758 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:05:07.582051 systemd[1]: Stopped target network.target - Network.
Dec 16 13:05:07.582083 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:05:07.582117 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:05:07.582603 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:05:07.582622 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:05:07.586666 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:05:07.596404 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:05:07.596429 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:05:07.596721 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:05:07.596756 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:05:07.597005 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:05:07.597029 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:05:07.597291 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:05:07.597329 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:05:07.597585 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:05:07.597614 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:05:07.597815 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:05:07.597977 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:05:07.642686 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:05:07.642778 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:05:07.648800 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:05:07.648975 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:05:07.649077 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:05:07.655663 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:05:07.656508 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:05:07.657627 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:05:07.657667 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:05:07.690323 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd8ca0f7 eth0: Data path switched from VF: enP30832s1
Dec 16 13:05:07.690484 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 13:05:07.659259 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:05:07.659365 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:05:07.659407 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:05:07.659727 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:05:07.659756 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:05:07.663277 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:05:07.663322 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:05:07.663504 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:05:07.663536 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:05:07.664692 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:05:07.666214 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:05:07.666268 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:05:07.678444 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:05:07.678583 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:05:07.686607 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:05:07.688546 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:05:07.688606 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:05:07.691596 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:05:07.691630 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:05:07.697262 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:05:07.697308 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:05:07.700611 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:05:07.700662 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:05:07.706339 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:05:07.706392 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:05:07.711060 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:05:07.711087 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:05:07.711127 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:05:07.713302 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:05:07.713354 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:05:07.714164 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 13:05:07.714214 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:05:07.716640 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:05:07.716687 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:05:07.716910 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:05:07.716940 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:05:07.719158 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 13:05:07.719217 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Dec 16 13:05:07.719244 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 13:05:07.719268 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:05:07.719504 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:05:07.720591 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:05:07.731515 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:05:07.731582 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:05:07.734388 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:05:07.735789 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:05:07.740818 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:05:07.744609 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:05:07.744661 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:05:07.752448 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:05:07.801312 systemd[1]: Switching root.
Dec 16 13:05:07.872176 systemd-journald[186]: Journal stopped
Dec 16 13:05:12.166145 systemd-journald[186]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:05:12.166176 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:05:12.166203 kernel: SELinux: policy capability open_perms=1
Dec 16 13:05:12.166213 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:05:12.166222 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:05:12.166232 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:05:12.166242 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:05:12.166251 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:05:12.166261 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:05:12.166270 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:05:12.166279 kernel: audit: type=1403 audit(1765890309.159:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:05:12.166290 systemd[1]: Successfully loaded SELinux policy in 208.387ms.
Dec 16 13:05:12.166302 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.033ms.
Dec 16 13:05:12.166314 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:05:12.166327 systemd[1]: Detected virtualization microsoft.
Dec 16 13:05:12.166336 systemd[1]: Detected architecture x86-64.
Dec 16 13:05:12.166346 systemd[1]: Detected first boot.
Dec 16 13:05:12.166356 systemd[1]: Hostname set to .
Dec 16 13:05:12.166368 systemd[1]: Initializing machine ID from random generator.
Dec 16 13:05:12.166379 zram_generator::config[1159]: No configuration found.
Dec 16 13:05:12.166392 kernel: Guest personality initialized and is inactive
Dec 16 13:05:12.166401 kernel: VMCI host device registered (name=vmci, major=10, minor=259)
Dec 16 13:05:12.166411 kernel: Initialized host personality
Dec 16 13:05:12.166419 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:05:12.166429 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:05:12.166439 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:05:12.166449 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:05:12.166459 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:05:12.166472 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:05:12.166482 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:05:12.166493 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:05:12.166502 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:05:12.166512 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:05:12.166521 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:05:12.166532 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:05:12.166544 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:05:12.166555 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:05:12.166565 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:05:12.166575 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:05:12.166586 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:05:12.166598 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:05:12.166610 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:05:12.166621 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:05:12.166634 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:05:12.166645 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:05:12.166655 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:05:12.166664 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:05:12.166674 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:05:12.166684 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:05:12.166695 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:05:12.166708 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:05:12.166718 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:05:12.166729 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:05:12.166739 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:05:12.166749 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:05:12.166759 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:05:12.166772 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:05:12.166783 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:05:12.166793 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:05:12.166804 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:05:12.166815 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:05:12.166825 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:05:12.166835 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:05:12.166847 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:05:12.166858 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:12.166869 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:05:12.166880 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:05:12.166890 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:05:12.166900 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:05:12.166910 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:05:12.166920 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:05:12.166931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:05:12.166944 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:05:12.166955 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:05:12.166965 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:05:12.166975 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:05:12.166985 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:05:12.166995 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:05:12.167006 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:05:12.167017 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:05:12.167030 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:05:12.167042 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:05:12.167052 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:05:12.167062 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:05:12.167073 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:05:12.167084 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:05:12.167094 kernel: fuse: init (API version 7.41)
Dec 16 13:05:12.167104 kernel: loop: module loaded
Dec 16 13:05:12.167117 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:05:12.167127 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:05:12.167137 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:05:12.167147 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:05:12.167157 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:05:12.167168 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:05:12.167179 systemd[1]: Stopped verity-setup.service.
Dec 16 13:05:12.167214 systemd-journald[1253]: Collecting audit messages is disabled.
Dec 16 13:05:12.167241 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:12.167252 systemd-journald[1253]: Journal started
Dec 16 13:05:12.167277 systemd-journald[1253]: Runtime Journal (/run/log/journal/a9e551fee37f45d886bf548e1d1a0c4d) is 8M, max 158.6M, 150.6M free.
Dec 16 13:05:11.772608 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:05:11.783675 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 16 13:05:11.784071 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:05:12.174245 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:05:12.175313 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:05:12.178369 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:05:12.181333 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:05:12.184308 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:05:12.185738 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:05:12.188312 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:05:12.191441 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:05:12.194458 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:05:12.198597 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:05:12.198789 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:05:12.202436 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:05:12.202584 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:05:12.206425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:05:12.206575 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:05:12.208468 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:05:12.208667 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:05:12.210395 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:05:12.210533 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:05:12.213576 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:05:12.215500 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:05:12.219341 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:05:12.222577 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:05:12.234274 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:05:12.239343 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:05:12.242014 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:05:12.244289 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:05:12.244324 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:05:12.248177 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:05:12.252739 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:05:12.256366 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:05:12.260893 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:05:12.264635 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:05:12.265121 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:05:12.267443 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:05:12.273198 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:05:12.275549 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:05:12.289228 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:05:12.293256 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:05:12.296911 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:05:12.300488 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:05:12.316128 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:05:12.327756 kernel: ACPI: bus type drm_connector registered
Dec 16 13:05:12.328084 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:05:12.328268 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:05:12.331439 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:05:12.334586 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:05:12.336643 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:05:12.347705 systemd-journald[1253]: Time spent on flushing to /var/log/journal/a9e551fee37f45d886bf548e1d1a0c4d is 33.144ms for 1002 entries.
Dec 16 13:05:12.347705 systemd-journald[1253]: System Journal (/var/log/journal/a9e551fee37f45d886bf548e1d1a0c4d) is 11.8M, max 2.6G, 2.6G free.
Dec 16 13:05:12.406053 systemd-journald[1253]: Received client request to flush runtime journal.
Dec 16 13:05:12.406148 systemd-journald[1253]: /var/log/journal/a9e551fee37f45d886bf548e1d1a0c4d/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Dec 16 13:05:12.406230 systemd-journald[1253]: Rotating system journal.
Dec 16 13:05:12.384934 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Dec 16 13:05:12.384947 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Dec 16 13:05:12.388452 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:05:12.391672 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:05:12.407411 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 13:05:12.412825 kernel: loop0: detected capacity change from 0 to 224512
Dec 16 13:05:12.446421 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:05:12.453166 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:05:12.493208 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:05:12.534204 kernel: loop1: detected capacity change from 0 to 128560
Dec 16 13:05:12.573516 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 13:05:12.578249 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:05:12.593900 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Dec 16 13:05:12.593920 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Dec 16 13:05:12.596024 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:05:12.785613 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:05:12.896804 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 13:05:12.899862 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:05:12.925900 systemd-udevd[1327]: Using default interface naming scheme 'v255'.
Dec 16 13:05:12.942221 kernel: loop2: detected capacity change from 0 to 110984
Dec 16 13:05:13.130079 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:05:13.136484 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:05:13.201913 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 13:05:13.263953 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 13:05:13.300684 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 13:05:13.318218 kernel: hv_vmbus: registering driver hyperv_fb
Dec 16 13:05:13.318423 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 16 13:05:13.320886 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 16 13:05:13.322357 kernel: Console: switching to colour dummy device 80x25
Dec 16 13:05:13.326213 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 13:05:13.331216 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:05:13.338370 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#195 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 13:05:13.338579 kernel: loop3: detected capacity change from 0 to 27936
Dec 16 13:05:13.378211 kernel: hv_vmbus: registering driver hv_balloon
Dec 16 13:05:13.388208 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 16 13:05:13.455796 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:05:13.468311 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:05:13.469255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:05:13.473418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:05:13.498759 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:05:13.499261 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:05:13.506311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:05:13.575681 systemd-networkd[1335]: lo: Link UP
Dec 16 13:05:13.575688 systemd-networkd[1335]: lo: Gained carrier
Dec 16 13:05:13.577017 systemd-networkd[1335]: Enumeration completed
Dec 16 13:05:13.577104 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:05:13.579149 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:05:13.581776 systemd-networkd[1335]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:05:13.581784 systemd-networkd[1335]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:05:13.583394 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:05:13.586005 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Dec 16 13:05:13.592223 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 13:05:13.592417 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd8ca0f7 eth0: Data path switched to VF: enP30832s1
Dec 16 13:05:13.592097 systemd-networkd[1335]: enP30832s1: Link UP
Dec 16 13:05:13.592172 systemd-networkd[1335]: eth0: Link UP
Dec 16 13:05:13.592175 systemd-networkd[1335]: eth0: Gained carrier
Dec 16 13:05:13.592209 systemd-networkd[1335]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:05:13.596423 systemd-networkd[1335]: enP30832s1: Gained carrier
Dec 16 13:05:13.600291 systemd-networkd[1335]: eth0: DHCPv4 address 10.200.0.34/24, gateway 10.200.0.1 acquired from 168.63.129.16
Dec 16 13:05:13.648749 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 13:05:13.653517 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Dec 16 13:05:13.655739 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:05:13.690854 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:05:13.699204 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Dec 16 13:05:13.760224 kernel: loop4: detected capacity change from 0 to 224512
Dec 16 13:05:13.770214 kernel: loop5: detected capacity change from 0 to 128560
Dec 16 13:05:13.779208 kernel: loop6: detected capacity change from 0 to 110984
Dec 16 13:05:13.789220 kernel: loop7: detected capacity change from 0 to 27936
Dec 16 13:05:13.813475 (sd-merge)[1426]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Dec 16 13:05:13.813850 (sd-merge)[1426]: Merged extensions into '/usr'.
Dec 16 13:05:13.817904 systemd[1]: Reload requested from client PID 1300 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:05:13.818022 systemd[1]: Reloading...
Dec 16 13:05:13.873205 zram_generator::config[1452]: No configuration found.
Dec 16 13:05:14.067876 systemd[1]: Reloading finished in 249 ms.
Dec 16 13:05:14.083149 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:05:14.086611 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:05:14.095121 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:05:14.101330 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:05:14.121868 systemd[1]: Reload requested from client PID 1518 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:05:14.121887 systemd[1]: Reloading...
Dec 16 13:05:14.124945 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:05:14.125237 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:05:14.125549 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:05:14.125824 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 13:05:14.126581 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 13:05:14.126893 systemd-tmpfiles[1519]: ACLs are not supported, ignoring.
Dec 16 13:05:14.126983 systemd-tmpfiles[1519]: ACLs are not supported, ignoring.
Dec 16 13:05:14.147622 systemd-tmpfiles[1519]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:05:14.147631 systemd-tmpfiles[1519]: Skipping /boot
Dec 16 13:05:14.158666 systemd-tmpfiles[1519]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:05:14.158682 systemd-tmpfiles[1519]: Skipping /boot
Dec 16 13:05:14.192231 zram_generator::config[1550]: No configuration found.
Dec 16 13:05:14.370968 systemd[1]: Reloading finished in 248 ms.
Dec 16 13:05:14.390885 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:05:14.400160 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:14.401075 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:05:14.405410 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:05:14.407262 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:05:14.410010 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:05:14.413993 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:05:14.418637 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:05:14.421061 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:05:14.421203 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:05:14.424134 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:05:14.431279 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:05:14.443383 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:05:14.445847 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:14.448394 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:05:14.449636 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:05:14.452664 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:05:14.452825 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:05:14.455404 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:05:14.455555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:05:14.463783 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:14.463961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:05:14.469334 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:05:14.472552 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:05:14.476397 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:05:14.480328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:05:14.480584 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:05:14.480911 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:14.490021 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:05:14.498759 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:14.499141 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:05:14.500872 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:05:14.504580 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:05:14.504703 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:05:14.504864 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:05:14.506919 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:14.507686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:05:14.508798 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:05:14.511787 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:05:14.512386 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:05:14.515429 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:05:14.515588 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:05:14.521111 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:05:14.521919 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:05:14.522304 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:05:14.532580 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:05:14.532822 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:05:14.547383 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:05:14.575878 systemd-resolved[1616]: Positive Trust Anchors:
Dec 16 13:05:14.575890 systemd-resolved[1616]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:05:14.575924 systemd-resolved[1616]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:05:14.595237 systemd-resolved[1616]: Using system hostname 'ci-4459.2.2-a-d193ee2782'.
Dec 16 13:05:14.596603 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:05:14.601004 augenrules[1654]: No rules
Dec 16 13:05:14.601555 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:05:14.601692 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:05:14.604809 systemd[1]: Reached target network.target - Network.
Dec 16 13:05:14.606123 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:05:14.956325 systemd-networkd[1335]: eth0: Gained IPv6LL
Dec 16 13:05:14.958567 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 13:05:14.963235 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 13:05:15.230061 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:05:15.233537 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:05:18.321019 ldconfig[1295]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:05:18.346023 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:05:18.351654 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:05:18.367919 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:05:18.372438 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:05:18.383368 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:05:18.384860 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:05:18.388240 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 13:05:18.389756 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 13:05:18.392301 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 13:05:18.393793 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 13:05:18.397245 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 13:05:18.397286 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:05:18.398390 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:05:18.402127 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 13:05:18.404529 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 13:05:18.407409 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 13:05:18.408975 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 13:05:18.410504 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 13:05:18.423631 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 13:05:18.426738 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 13:05:18.429747 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 13:05:18.431855 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:05:18.432874 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:05:18.435297 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:05:18.435322 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:05:18.437113 systemd[1]: Starting chronyd.service - NTP client/server...
Dec 16 13:05:18.442268 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 13:05:18.446343 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 13:05:18.449329 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 13:05:18.453362 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 13:05:18.459350 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 13:05:18.464420 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 13:05:18.466695 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 13:05:18.470154 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 13:05:18.470708 jq[1672]: false
Dec 16 13:05:18.472091 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Dec 16 13:05:18.474416 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Dec 16 13:05:18.476722 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Dec 16 13:05:18.482386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:05:18.486341 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 13:05:18.488908 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 13:05:18.499915 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 13:05:18.505048 KVP[1678]: KVP starting; pid is:1678
Dec 16 13:05:18.507445 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 13:05:18.510781 KVP[1678]: KVP LIC Version: 3.1
Dec 16 13:05:18.511287 kernel: hv_utils: KVP IC version 4.0
Dec 16 13:05:18.514413 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 13:05:18.525300 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 13:05:18.528144 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 13:05:18.528595 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 13:05:18.530451 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 13:05:18.532424 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Refreshing passwd entry cache
Dec 16 13:05:18.532141 oslogin_cache_refresh[1677]: Refreshing passwd entry cache
Dec 16 13:05:18.537365 extend-filesystems[1673]: Found /dev/nvme0n1p6
Dec 16 13:05:18.537507 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 13:05:18.542379 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 13:05:18.548572 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 13:05:18.548749 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 13:05:18.553986 extend-filesystems[1673]: Found /dev/nvme0n1p9
Dec 16 13:05:18.553229 oslogin_cache_refresh[1677]: Failure getting users, quitting
Dec 16 13:05:18.555931 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Failure getting users, quitting
Dec 16 13:05:18.555931 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:05:18.555931 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Refreshing group entry cache
Dec 16 13:05:18.553245 oslogin_cache_refresh[1677]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:05:18.553287 oslogin_cache_refresh[1677]: Refreshing group entry cache
Dec 16 13:05:18.560631 extend-filesystems[1673]: Checking size of /dev/nvme0n1p9
Dec 16 13:05:18.562547 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 13:05:18.565973 jq[1692]: true
Dec 16 13:05:18.563251 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 13:05:18.568462 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Failure getting groups, quitting
Dec 16 13:05:18.568462 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:05:18.567336 oslogin_cache_refresh[1677]: Failure getting groups, quitting
Dec 16 13:05:18.567347 oslogin_cache_refresh[1677]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:05:18.568923 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 16 13:05:18.569106 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 16 13:05:18.593139 jq[1706]: true
Dec 16 13:05:18.599555 (ntainerd)[1714]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 13:05:18.602404 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 13:05:18.602602 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 13:05:18.608015 chronyd[1667]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Dec 16 13:05:18.616518 extend-filesystems[1673]: Old size kept for /dev/nvme0n1p9
Dec 16 13:05:18.615874 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 13:05:18.616054 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 13:05:18.647748 update_engine[1690]: I20251216 13:05:18.645172 1690 main.cc:92] Flatcar Update Engine starting
Dec 16 13:05:18.661563 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 13:05:18.664661 chronyd[1667]: Timezone right/UTC failed leap second check, ignoring
Dec 16 13:05:18.664875 systemd[1]: Started chronyd.service - NTP client/server.
Dec 16 13:05:18.664803 chronyd[1667]: Loaded seccomp filter (level 2)
Dec 16 13:05:18.672558 tar[1700]: linux-amd64/LICENSE
Dec 16 13:05:18.672558 tar[1700]: linux-amd64/helm
Dec 16 13:05:18.731962 bash[1745]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:05:18.733493 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 13:05:18.736950 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 16 13:05:18.760157 systemd-logind[1688]: New seat seat0.
Dec 16 13:05:18.760835 dbus-daemon[1670]: [system] SELinux support is enabled
Dec 16 13:05:18.760960 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 13:05:18.768254 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 13:05:18.768285 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 13:05:18.775909 systemd-logind[1688]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 13:05:18.778274 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 13:05:18.778294 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 13:05:18.780908 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 13:05:18.783106 update_engine[1690]: I20251216 13:05:18.783054 1690 update_check_scheduler.cc:74] Next update check in 8m1s
Dec 16 13:05:18.786312 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 13:05:18.794593 dbus-daemon[1670]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 16 13:05:18.809348 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 13:05:18.901156 coreos-metadata[1669]: Dec 16 13:05:18.901 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 16 13:05:18.919664 coreos-metadata[1669]: Dec 16 13:05:18.919 INFO Fetch successful
Dec 16 13:05:18.922534 coreos-metadata[1669]: Dec 16 13:05:18.922 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Dec 16 13:05:18.927276 coreos-metadata[1669]: Dec 16 13:05:18.927 INFO Fetch successful
Dec 16 13:05:18.927343 coreos-metadata[1669]: Dec 16 13:05:18.927 INFO Fetching http://168.63.129.16/machine/decbc5de-be20-4f22-a82d-8b5b9e71b536/9b0b7113%2D42f9%2D4c8c%2Db7e2%2D04f9aa715579.%5Fci%2D4459.2.2%2Da%2Dd193ee2782?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Dec 16 13:05:18.931380 coreos-metadata[1669]: Dec 16 13:05:18.931 INFO Fetch successful
Dec 16 13:05:18.931589 coreos-metadata[1669]: Dec 16 13:05:18.931 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Dec 16 13:05:18.939217 coreos-metadata[1669]: Dec 16 13:05:18.939 INFO Fetch successful
Dec 16 13:05:19.006176 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 13:05:19.009444 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 13:05:19.109112 locksmithd[1760]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 13:05:19.376972 tar[1700]: linux-amd64/README.md
Dec 16 13:05:19.403336 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 13:05:19.510432 sshd_keygen[1719]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 13:05:19.537458 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 13:05:19.541389 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 13:05:19.545903 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Dec 16 13:05:19.558167 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 13:05:19.558397 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 13:05:19.563442 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 13:05:19.584677 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 13:05:19.591331 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 13:05:19.594518 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 13:05:19.599394 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 13:05:19.603114 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Dec 16 13:05:19.918246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:05:19.983374 containerd[1714]: time="2025-12-16T13:05:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 13:05:19.984018 containerd[1714]: time="2025-12-16T13:05:19.983989073Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 16 13:05:19.990587 containerd[1714]: time="2025-12-16T13:05:19.990554127Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.075µs"
Dec 16 13:05:19.990587 containerd[1714]: time="2025-12-16T13:05:19.990579687Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 13:05:19.990674 containerd[1714]: time="2025-12-16T13:05:19.990597185Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 13:05:19.990753 containerd[1714]: time="2025-12-16T13:05:19.990738281Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 13:05:19.990778 containerd[1714]: time="2025-12-16T13:05:19.990752488Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 13:05:19.990778 containerd[1714]: time="2025-12-16T13:05:19.990773273Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:05:19.990852 containerd[1714]: time="2025-12-16T13:05:19.990826824Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:05:19.990852 containerd[1714]: time="2025-12-16T13:05:19.990848332Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:05:19.991044 containerd[1714]: time="2025-12-16T13:05:19.991026887Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:05:19.991044 containerd[1714]: time="2025-12-16T13:05:19.991040003Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:05:19.991092 containerd[1714]: time="2025-12-16T13:05:19.991050033Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:05:19.991092 containerd[1714]: time="2025-12-16T13:05:19.991057554Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 13:05:19.991140 containerd[1714]: time="2025-12-16T13:05:19.991117141Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 13:05:19.991290 containerd[1714]: time="2025-12-16T13:05:19.991274859Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:05:19.991327 containerd[1714]: time="2025-12-16T13:05:19.991298013Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:05:19.991327 containerd[1714]: time="2025-12-16T13:05:19.991307459Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 13:05:19.991370 containerd[1714]: time="2025-12-16T13:05:19.991332845Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 13:05:19.991548 containerd[1714]: time="2025-12-16T13:05:19.991526986Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 13:05:19.991586 containerd[1714]: time="2025-12-16T13:05:19.991575522Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 13:05:20.007596 containerd[1714]: time="2025-12-16T13:05:20.007551815Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 13:05:20.007671 containerd[1714]: time="2025-12-16T13:05:20.007611494Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 13:05:20.007671 containerd[1714]: time="2025-12-16T13:05:20.007625503Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 13:05:20.007671 containerd[1714]: time="2025-12-16T13:05:20.007642541Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 13:05:20.007671 containerd[1714]: time="2025-12-16T13:05:20.007654340Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 13:05:20.007671 containerd[1714]: time="2025-12-16T13:05:20.007664806Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 13:05:20.007779 containerd[1714]: time="2025-12-16T13:05:20.007677358Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 13:05:20.007779 containerd[1714]: time="2025-12-16T13:05:20.007688797Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 13:05:20.007779 containerd[1714]: time="2025-12-16T13:05:20.007704283Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 13:05:20.007779 containerd[1714]: time="2025-12-16T13:05:20.007715184Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 13:05:20.007779 containerd[1714]: time="2025-12-16T13:05:20.007724580Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 13:05:20.007779 containerd[1714]: time="2025-12-16T13:05:20.007736531Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 13:05:20.007890 containerd[1714]: time="2025-12-16T13:05:20.007836289Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 13:05:20.007890 containerd[1714]: time="2025-12-16T13:05:20.007852611Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 13:05:20.007890 containerd[1714]: time="2025-12-16T13:05:20.007875130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 13:05:20.007890 containerd[1714]: time="2025-12-16T13:05:20.007885865Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 13:05:20.007962 containerd[1714]: time="2025-12-16T13:05:20.007895762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 13:05:20.007962 containerd[1714]: time="2025-12-16T13:05:20.007906503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 13:05:20.007962 containerd[1714]: time="2025-12-16T13:05:20.007917202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 13:05:20.007962 containerd[1714]: time="2025-12-16T13:05:20.007926740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 13:05:20.007962 containerd[1714]: time="2025-12-16T13:05:20.007937964Z" level=info msg="loading plugin"
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:05:20.007962 containerd[1714]: time="2025-12-16T13:05:20.007948246Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:05:20.007962 containerd[1714]: time="2025-12-16T13:05:20.007957227Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:05:20.008088 containerd[1714]: time="2025-12-16T13:05:20.007994304Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:05:20.008088 containerd[1714]: time="2025-12-16T13:05:20.008006308Z" level=info msg="Start snapshots syncer" Dec 16 13:05:20.008088 containerd[1714]: time="2025-12-16T13:05:20.008025093Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:05:20.008350 containerd[1714]: time="2025-12-16T13:05:20.008319683Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:05:20.008486 containerd[1714]: time="2025-12-16T13:05:20.008365323Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:05:20.008486 containerd[1714]: time="2025-12-16T13:05:20.008412236Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:05:20.008530 containerd[1714]: time="2025-12-16T13:05:20.008498704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:05:20.008530 containerd[1714]: time="2025-12-16T13:05:20.008516474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:05:20.008530 containerd[1714]: time="2025-12-16T13:05:20.008526997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:05:20.008590 containerd[1714]: time="2025-12-16T13:05:20.008536936Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:05:20.008590 containerd[1714]: time="2025-12-16T13:05:20.008551042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:05:20.008590 containerd[1714]: time="2025-12-16T13:05:20.008560707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:05:20.008590 containerd[1714]: time="2025-12-16T13:05:20.008570533Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:05:20.008661 containerd[1714]: time="2025-12-16T13:05:20.008593442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:05:20.008661 containerd[1714]: time="2025-12-16T13:05:20.008628049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:05:20.008661 containerd[1714]: time="2025-12-16T13:05:20.008643267Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:05:20.008722 containerd[1714]: time="2025-12-16T13:05:20.008664317Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:05:20.008722 containerd[1714]: time="2025-12-16T13:05:20.008677233Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:05:20.008722 containerd[1714]: time="2025-12-16T13:05:20.008686473Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:05:20.008722 containerd[1714]: time="2025-12-16T13:05:20.008695643Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:05:20.008722 containerd[1714]: time="2025-12-16T13:05:20.008703181Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:05:20.008722 containerd[1714]: time="2025-12-16T13:05:20.008712614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:05:20.008831 containerd[1714]: time="2025-12-16T13:05:20.008727779Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:05:20.008831 containerd[1714]: time="2025-12-16T13:05:20.008742735Z" level=info msg="runtime interface created" Dec 16 13:05:20.008831 containerd[1714]: time="2025-12-16T13:05:20.008747769Z" level=info msg="created NRI interface" Dec 16 13:05:20.008831 containerd[1714]: time="2025-12-16T13:05:20.008755874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:05:20.008831 containerd[1714]: time="2025-12-16T13:05:20.008767403Z" level=info msg="Connect containerd service" Dec 16 13:05:20.008831 containerd[1714]: time="2025-12-16T13:05:20.008792628Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:05:20.009460 
containerd[1714]: time="2025-12-16T13:05:20.009430897Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:05:20.089290 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:05:20.584430 kubelet[1823]: E1216 13:05:20.584385 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:05:20.586434 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:05:20.586575 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:05:20.586884 systemd[1]: kubelet.service: Consumed 937ms CPU time, 263.3M memory peak. Dec 16 13:05:20.631136 containerd[1714]: time="2025-12-16T13:05:20.631095143Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:05:20.631247 containerd[1714]: time="2025-12-16T13:05:20.631159256Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 13:05:20.631335 containerd[1714]: time="2025-12-16T13:05:20.631320854Z" level=info msg="Start subscribing containerd event" Dec 16 13:05:20.631386 containerd[1714]: time="2025-12-16T13:05:20.631361690Z" level=info msg="Start recovering state" Dec 16 13:05:20.631469 containerd[1714]: time="2025-12-16T13:05:20.631457967Z" level=info msg="Start event monitor" Dec 16 13:05:20.631492 containerd[1714]: time="2025-12-16T13:05:20.631475238Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:05:20.631492 containerd[1714]: time="2025-12-16T13:05:20.631484141Z" level=info msg="Start streaming server" Dec 16 13:05:20.631545 containerd[1714]: time="2025-12-16T13:05:20.631495627Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:05:20.631545 containerd[1714]: time="2025-12-16T13:05:20.631508278Z" level=info msg="runtime interface starting up..." Dec 16 13:05:20.631545 containerd[1714]: time="2025-12-16T13:05:20.631514833Z" level=info msg="starting plugins..." Dec 16 13:05:20.631545 containerd[1714]: time="2025-12-16T13:05:20.631528342Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:05:20.631647 containerd[1714]: time="2025-12-16T13:05:20.631635402Z" level=info msg="containerd successfully booted in 0.648567s" Dec 16 13:05:20.632300 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:05:20.635576 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:05:20.638472 systemd[1]: Startup finished in 3.170s (kernel) + 14.474s (initrd) + 11.685s (userspace) = 29.330s. Dec 16 13:05:20.896867 login[1812]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Dec 16 13:05:20.897646 login[1810]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:05:20.902836 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Dec 16 13:05:20.903925 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 13:05:20.911228 systemd-logind[1688]: New session 1 of user core.
Dec 16 13:05:20.937155 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 13:05:20.943263 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 13:05:20.954280 (systemd)[1852]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 13:05:20.956029 systemd-logind[1688]: New session c1 of user core.
Dec 16 13:05:21.170685 systemd[1852]: Queued start job for default target default.target.
Dec 16 13:05:21.179932 systemd[1852]: Created slice app.slice - User Application Slice.
Dec 16 13:05:21.179958 systemd[1852]: Reached target paths.target - Paths.
Dec 16 13:05:21.179990 systemd[1852]: Reached target timers.target - Timers.
Dec 16 13:05:21.180910 systemd[1852]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 13:05:21.189543 systemd[1852]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 13:05:21.189596 systemd[1852]: Reached target sockets.target - Sockets.
Dec 16 13:05:21.189631 systemd[1852]: Reached target basic.target - Basic System.
Dec 16 13:05:21.189693 systemd[1852]: Reached target default.target - Main User Target.
Dec 16 13:05:21.189718 systemd[1852]: Startup finished in 229ms.
Dec 16 13:05:21.189777 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 13:05:21.191557 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 13:05:21.432402 waagent[1815]: 2025-12-16T13:05:21.432266Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Dec 16 13:05:21.442348 waagent[1815]: 2025-12-16T13:05:21.432942Z INFO Daemon Daemon OS: flatcar 4459.2.2
Dec 16 13:05:21.442348 waagent[1815]: 2025-12-16T13:05:21.433304Z INFO Daemon Daemon Python: 3.11.13
Dec 16 13:05:21.442348 waagent[1815]: 2025-12-16T13:05:21.434015Z INFO Daemon Daemon Run daemon
Dec 16 13:05:21.442348 waagent[1815]: 2025-12-16T13:05:21.434478Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2'
Dec 16 13:05:21.442348 waagent[1815]: 2025-12-16T13:05:21.434711Z INFO Daemon Daemon Using waagent for provisioning
Dec 16 13:05:21.442348 waagent[1815]: 2025-12-16T13:05:21.434869Z INFO Daemon Daemon Activate resource disk
Dec 16 13:05:21.442348 waagent[1815]: 2025-12-16T13:05:21.434931Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Dec 16 13:05:21.442348 waagent[1815]: 2025-12-16T13:05:21.436457Z INFO Daemon Daemon Found device: None
Dec 16 13:05:21.442348 waagent[1815]: 2025-12-16T13:05:21.436716Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Dec 16 13:05:21.442348 waagent[1815]: 2025-12-16T13:05:21.436948Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Dec 16 13:05:21.442348 waagent[1815]: 2025-12-16T13:05:21.437453Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 16 13:05:21.442348 waagent[1815]: 2025-12-16T13:05:21.437678Z INFO Daemon Daemon Running default provisioning handler
Dec 16 13:05:21.463054 waagent[1815]: 2025-12-16T13:05:21.455180Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Dec 16 13:05:21.463054 waagent[1815]: 2025-12-16T13:05:21.456167Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 16 13:05:21.463054 waagent[1815]: 2025-12-16T13:05:21.456718Z INFO Daemon Daemon cloud-init is enabled: False
Dec 16 13:05:21.463054 waagent[1815]: 2025-12-16T13:05:21.456780Z INFO Daemon Daemon Copying ovf-env.xml
Dec 16 13:05:21.535178 waagent[1815]: 2025-12-16T13:05:21.535123Z INFO Daemon Daemon Successfully mounted dvd
Dec 16 13:05:21.561688 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Dec 16 13:05:21.563830 waagent[1815]: 2025-12-16T13:05:21.563785Z INFO Daemon Daemon Detect protocol endpoint
Dec 16 13:05:21.564950 waagent[1815]: 2025-12-16T13:05:21.564880Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 16 13:05:21.566326 waagent[1815]: 2025-12-16T13:05:21.566268Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Dec 16 13:05:21.566961 waagent[1815]: 2025-12-16T13:05:21.566676Z INFO Daemon Daemon Test for route to 168.63.129.16
Dec 16 13:05:21.572877 waagent[1815]: 2025-12-16T13:05:21.567080Z INFO Daemon Daemon Route to 168.63.129.16 exists
Dec 16 13:05:21.572877 waagent[1815]: 2025-12-16T13:05:21.567330Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Dec 16 13:05:21.578367 waagent[1815]: 2025-12-16T13:05:21.578331Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Dec 16 13:05:21.580418 waagent[1815]: 2025-12-16T13:05:21.578923Z INFO Daemon Daemon Wire protocol version:2012-11-30
Dec 16 13:05:21.580418 waagent[1815]: 2025-12-16T13:05:21.579163Z INFO Daemon Daemon Server preferred version:2015-04-05
Dec 16 13:05:21.690487 waagent[1815]: 2025-12-16T13:05:21.690358Z INFO Daemon Daemon Initializing goal state during protocol detection
Dec 16 13:05:21.692394 waagent[1815]: 2025-12-16T13:05:21.692301Z INFO Daemon Daemon Forcing an update of the goal state.
Dec 16 13:05:21.697312 waagent[1815]: 2025-12-16T13:05:21.697274Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Dec 16 13:05:21.710729 waagent[1815]: 2025-12-16T13:05:21.710698Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177
Dec 16 13:05:21.712253 waagent[1815]: 2025-12-16T13:05:21.712218Z INFO Daemon
Dec 16 13:05:21.713054 waagent[1815]: 2025-12-16T13:05:21.712966Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 20e19e22-9ba3-4683-bad8-5b86befb797d eTag: 18346328996635640408 source: Fabric]
Dec 16 13:05:21.715701 waagent[1815]: 2025-12-16T13:05:21.715663Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Dec 16 13:05:21.717290 waagent[1815]: 2025-12-16T13:05:21.717259Z INFO Daemon
Dec 16 13:05:21.717957 waagent[1815]: 2025-12-16T13:05:21.717736Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Dec 16 13:05:21.730466 waagent[1815]: 2025-12-16T13:05:21.730439Z INFO Daemon Daemon Downloading artifacts profile blob
Dec 16 13:05:21.812524 waagent[1815]: 2025-12-16T13:05:21.812473Z INFO Daemon Downloaded certificate {'thumbprint': '342A91527E17CF0AFDA707C616B2E7D57D88ABCD', 'hasPrivateKey': True}
Dec 16 13:05:21.815129 waagent[1815]: 2025-12-16T13:05:21.815096Z INFO Daemon Fetch goal state completed
Dec 16 13:05:21.820750 waagent[1815]: 2025-12-16T13:05:21.820720Z INFO Daemon Daemon Starting provisioning
Dec 16 13:05:21.821443 waagent[1815]: 2025-12-16T13:05:21.821412Z INFO Daemon Daemon Handle ovf-env.xml.
Dec 16 13:05:21.821661 waagent[1815]: 2025-12-16T13:05:21.821638Z INFO Daemon Daemon Set hostname [ci-4459.2.2-a-d193ee2782]
Dec 16 13:05:21.825224 waagent[1815]: 2025-12-16T13:05:21.825167Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-a-d193ee2782]
Dec 16 13:05:21.825939 waagent[1815]: 2025-12-16T13:05:21.825789Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Dec 16 13:05:21.826612 waagent[1815]: 2025-12-16T13:05:21.826067Z INFO Daemon Daemon Primary interface is [eth0]
Dec 16 13:05:21.833609 systemd-networkd[1335]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:05:21.833616 systemd-networkd[1335]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:05:21.833637 systemd-networkd[1335]: eth0: DHCP lease lost
Dec 16 13:05:21.834423 waagent[1815]: 2025-12-16T13:05:21.834373Z INFO Daemon Daemon Create user account if not exists
Dec 16 13:05:21.835220 waagent[1815]: 2025-12-16T13:05:21.834717Z INFO Daemon Daemon User core already exists, skip useradd
Dec 16 13:05:21.835220 waagent[1815]: 2025-12-16T13:05:21.834943Z INFO Daemon Daemon Configure sudoer
Dec 16 13:05:21.840770 waagent[1815]: 2025-12-16T13:05:21.840724Z INFO Daemon Daemon Configure sshd
Dec 16 13:05:21.846630 waagent[1815]: 2025-12-16T13:05:21.846588Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Dec 16 13:05:21.847294 waagent[1815]: 2025-12-16T13:05:21.847005Z INFO Daemon Daemon Deploy ssh public key.
Dec 16 13:05:21.863235 systemd-networkd[1335]: eth0: DHCPv4 address 10.200.0.34/24, gateway 10.200.0.1 acquired from 168.63.129.16
Dec 16 13:05:21.898268 login[1812]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 16 13:05:21.902512 systemd-logind[1688]: New session 2 of user core.
Dec 16 13:05:21.908318 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 13:05:22.925568 waagent[1815]: 2025-12-16T13:05:22.925511Z INFO Daemon Daemon Provisioning complete
Dec 16 13:05:22.935214 waagent[1815]: 2025-12-16T13:05:22.935171Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Dec 16 13:05:22.935965 waagent[1815]: 2025-12-16T13:05:22.935668Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Dec 16 13:05:22.939460 waagent[1815]: 2025-12-16T13:05:22.936007Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Dec 16 13:05:23.036237 waagent[1902]: 2025-12-16T13:05:23.036152Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Dec 16 13:05:23.036510 waagent[1902]: 2025-12-16T13:05:23.036272Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2
Dec 16 13:05:23.036510 waagent[1902]: 2025-12-16T13:05:23.036313Z INFO ExtHandler ExtHandler Python: 3.11.13
Dec 16 13:05:23.036510 waagent[1902]: 2025-12-16T13:05:23.036354Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Dec 16 13:05:23.076616 waagent[1902]: 2025-12-16T13:05:23.076567Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Dec 16 13:05:23.076753 waagent[1902]: 2025-12-16T13:05:23.076729Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 16 13:05:23.076804 waagent[1902]: 2025-12-16T13:05:23.076781Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 16 13:05:23.087105 waagent[1902]: 2025-12-16T13:05:23.087049Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Dec 16 13:05:23.096863 waagent[1902]: 2025-12-16T13:05:23.096830Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177
Dec 16 13:05:23.097226 waagent[1902]: 2025-12-16T13:05:23.097178Z INFO ExtHandler
Dec 16 13:05:23.097271 waagent[1902]: 2025-12-16T13:05:23.097257Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e965227b-ce28-4b6f-8fea-e3404eb8e756 eTag: 18346328996635640408 source: Fabric]
Dec 16 13:05:23.097485 waagent[1902]: 2025-12-16T13:05:23.097457Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Dec 16 13:05:23.097825 waagent[1902]: 2025-12-16T13:05:23.097797Z INFO ExtHandler
Dec 16 13:05:23.097862 waagent[1902]: 2025-12-16T13:05:23.097840Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Dec 16 13:05:23.106458 waagent[1902]: 2025-12-16T13:05:23.106429Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Dec 16 13:05:23.169078 waagent[1902]: 2025-12-16T13:05:23.169027Z INFO ExtHandler Downloaded certificate {'thumbprint': '342A91527E17CF0AFDA707C616B2E7D57D88ABCD', 'hasPrivateKey': True}
Dec 16 13:05:23.169438 waagent[1902]: 2025-12-16T13:05:23.169409Z INFO ExtHandler Fetch goal state completed
Dec 16 13:05:23.179806 waagent[1902]: 2025-12-16T13:05:23.179727Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025)
Dec 16 13:05:23.183802 waagent[1902]: 2025-12-16T13:05:23.183757Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1902
Dec 16 13:05:23.183919 waagent[1902]: 2025-12-16T13:05:23.183895Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Dec 16 13:05:23.184147 waagent[1902]: 2025-12-16T13:05:23.184123Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Dec 16 13:05:23.185142 waagent[1902]: 2025-12-16T13:05:23.185109Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk']
Dec 16 13:05:23.185449 waagent[1902]: 2025-12-16T13:05:23.185423Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Dec 16 13:05:23.185553 waagent[1902]: 2025-12-16T13:05:23.185533Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Dec 16 13:05:23.185932 waagent[1902]: 2025-12-16T13:05:23.185909Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 16 13:05:23.251288 waagent[1902]: 2025-12-16T13:05:23.251262Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 16 13:05:23.251435 waagent[1902]: 2025-12-16T13:05:23.251406Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 16 13:05:23.256975 waagent[1902]: 2025-12-16T13:05:23.256622Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 16 13:05:23.261659 systemd[1]: Reload requested from client PID 1917 ('systemctl') (unit waagent.service)...
Dec 16 13:05:23.261673 systemd[1]: Reloading...
Dec 16 13:05:23.327209 zram_generator::config[1956]: No configuration found.
Dec 16 13:05:23.507129 systemd[1]: Reloading finished in 245 ms.
Dec 16 13:05:23.522755 waagent[1902]: 2025-12-16T13:05:23.522683Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Dec 16 13:05:23.522854 waagent[1902]: 2025-12-16T13:05:23.522832Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Dec 16 13:05:23.600215 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#197 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001
Dec 16 13:05:23.832891 waagent[1902]: 2025-12-16T13:05:23.832773Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Dec 16 13:05:23.833110 waagent[1902]: 2025-12-16T13:05:23.833084Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Dec 16 13:05:23.833992 waagent[1902]: 2025-12-16T13:05:23.833755Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 16 13:05:23.833992 waagent[1902]: 2025-12-16T13:05:23.833892Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 16 13:05:23.834147 waagent[1902]: 2025-12-16T13:05:23.834118Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 16 13:05:23.834368 waagent[1902]: 2025-12-16T13:05:23.834346Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Dec 16 13:05:23.834581 waagent[1902]: 2025-12-16T13:05:23.834556Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Dec 16 13:05:23.834674 waagent[1902]: 2025-12-16T13:05:23.834650Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Dec 16 13:05:23.834674 waagent[1902]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Dec 16 13:05:23.834674 waagent[1902]: eth0 00000000 0100C80A 0003 0 0 1024 00000000 0 0 0
Dec 16 13:05:23.834674 waagent[1902]: eth0 0000C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Dec 16 13:05:23.834674 waagent[1902]: eth0 0100C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Dec 16 13:05:23.834674 waagent[1902]: eth0 10813FA8 0100C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 16 13:05:23.834674 waagent[1902]: eth0 FEA9FEA9 0100C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 16 13:05:23.835002 waagent[1902]: 2025-12-16T13:05:23.834932Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 16 13:05:23.835002 waagent[1902]: 2025-12-16T13:05:23.834987Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 16 13:05:23.835214 waagent[1902]: 2025-12-16T13:05:23.835161Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Dec 16 13:05:23.835258 waagent[1902]: 2025-12-16T13:05:23.835218Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Dec 16 13:05:23.835496 waagent[1902]: 2025-12-16T13:05:23.835439Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Dec 16 13:05:23.835542 waagent[1902]: 2025-12-16T13:05:23.835503Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Dec 16 13:05:23.835843 waagent[1902]: 2025-12-16T13:05:23.835815Z INFO EnvHandler ExtHandler Configure routes
Dec 16 13:05:23.835880 waagent[1902]: 2025-12-16T13:05:23.835866Z INFO EnvHandler ExtHandler Gateway:None
Dec 16 13:05:23.835914 waagent[1902]: 2025-12-16T13:05:23.835897Z INFO EnvHandler ExtHandler Routes:None
Dec 16 13:05:23.836344 waagent[1902]: 2025-12-16T13:05:23.836321Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Dec 16 13:05:23.847566 waagent[1902]: 2025-12-16T13:05:23.847521Z INFO ExtHandler ExtHandler
Dec 16 13:05:23.847629 waagent[1902]: 2025-12-16T13:05:23.847584Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: aa4a882c-da79-41e4-9734-d379b78dbf4d correlation 2a66e3e6-89d5-4b9d-a5bb-535ffce17707 created: 2025-12-16T13:04:20.914673Z]
Dec 16 13:05:23.847871 waagent[1902]: 2025-12-16T13:05:23.847843Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Dec 16 13:05:23.848273 waagent[1902]: 2025-12-16T13:05:23.848249Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Dec 16 13:05:23.874591 waagent[1902]: 2025-12-16T13:05:23.874545Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Dec 16 13:05:23.874591 waagent[1902]: Try `iptables -h' or 'iptables --help' for more information.)
Dec 16 13:05:23.874875 waagent[1902]: 2025-12-16T13:05:23.874849Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E7C8C42D-C0EB-4EBC-ACA6-4F36CE7321B4;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 13:05:23.911681 waagent[1902]: 2025-12-16T13:05:23.911635Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 13:05:23.911681 waagent[1902]: Executing ['ip', '-a', '-o', 'link']: Dec 16 13:05:23.911681 waagent[1902]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 13:05:23.911681 waagent[1902]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:8c:a0:f7 brd ff:ff:ff:ff:ff:ff\ alias Network Device Dec 16 13:05:23.911681 waagent[1902]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:8c:a0:f7 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Dec 16 13:05:23.911681 waagent[1902]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 13:05:23.911681 waagent[1902]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 13:05:23.911681 waagent[1902]: 2: eth0 inet 10.200.0.34/24 metric 1024 brd 10.200.0.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 13:05:23.911681 waagent[1902]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 13:05:23.911681 waagent[1902]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 13:05:23.911681 waagent[1902]: 2: eth0 inet6 fe80::6245:bdff:fe8c:a0f7/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 13:05:23.940131 waagent[1902]: 2025-12-16T13:05:23.940085Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 13:05:23.940131 waagent[1902]: Chain INPUT (policy ACCEPT 0 
packets, 0 bytes) Dec 16 13:05:23.940131 waagent[1902]: pkts bytes target prot opt in out source destination Dec 16 13:05:23.940131 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:23.940131 waagent[1902]: pkts bytes target prot opt in out source destination Dec 16 13:05:23.940131 waagent[1902]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:23.940131 waagent[1902]: pkts bytes target prot opt in out source destination Dec 16 13:05:23.940131 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:05:23.940131 waagent[1902]: 4 408 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:05:23.940131 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:05:23.942941 waagent[1902]: 2025-12-16T13:05:23.942894Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 13:05:23.942941 waagent[1902]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:23.942941 waagent[1902]: pkts bytes target prot opt in out source destination Dec 16 13:05:23.942941 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:23.942941 waagent[1902]: pkts bytes target prot opt in out source destination Dec 16 13:05:23.942941 waagent[1902]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:23.942941 waagent[1902]: pkts bytes target prot opt in out source destination Dec 16 13:05:23.942941 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:05:23.942941 waagent[1902]: 9 874 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:05:23.942941 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:05:30.837377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:05:30.838777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
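The OUTPUT chain listed above shows the three rules the agent creates for the Azure fabric endpoint: allow DNS over tcp/53, allow root-owned (UID 0) traffic, and drop any other new connection to 168.63.129.16. A reconstruction of equivalent rule argv lists, inferred from the listing rather than taken from waagent source:

```python
# Sketch: the three security-table OUTPUT rules reported in the listing
# above, rebuilt as iptables argument vectors (an illustration only).
WIRESERVER = "168.63.129.16"

def rule(*args: str) -> list[str]:
    """Common prefix shared by all three rules targeting the WireServer."""
    return ["iptables", "-w", "-t", "security", "-A", "OUTPUT",
            "-d", WIRESERVER, "-p", "tcp", *args]

rules = [
    rule("--destination-port", "53", "-j", "ACCEPT"),          # DNS probe
    rule("-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"),   # root (agent)
    rule("-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"),
]
for r in rules:
    print(" ".join(r))
```

The counter diff between the two listings (4 packets/408 bytes vs. 9/874 on the UID-match rule) shows the agent's own traffic accumulating on the ACCEPT rule while the DROP rule stays at zero.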
Dec 16 13:05:31.320239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:31.323331 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:05:31.357323 kubelet[2055]: E1216 13:05:31.357285 2055 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:05:31.360200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:05:31.360345 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:05:31.360640 systemd[1]: kubelet.service: Consumed 131ms CPU time, 107.8M memory peak. Dec 16 13:05:37.401371 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:05:37.402374 systemd[1]: Started sshd@0-10.200.0.34:22-10.200.16.10:52306.service - OpenSSH per-connection server daemon (10.200.16.10:52306). Dec 16 13:05:38.072881 sshd[2064]: Accepted publickey for core from 10.200.16.10 port 52306 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:38.073943 sshd-session[2064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:38.078241 systemd-logind[1688]: New session 3 of user core. Dec 16 13:05:38.083339 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:05:38.627592 systemd[1]: Started sshd@1-10.200.0.34:22-10.200.16.10:52310.service - OpenSSH per-connection server daemon (10.200.16.10:52310). 
Dec 16 13:05:39.234147 sshd[2070]: Accepted publickey for core from 10.200.16.10 port 52310 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:39.235314 sshd-session[2070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:39.239495 systemd-logind[1688]: New session 4 of user core. Dec 16 13:05:39.245327 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:05:39.626462 sshd[2073]: Connection closed by 10.200.16.10 port 52310 Dec 16 13:05:39.627247 sshd-session[2070]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:39.630223 systemd[1]: sshd@1-10.200.0.34:22-10.200.16.10:52310.service: Deactivated successfully. Dec 16 13:05:39.631828 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:05:39.632596 systemd-logind[1688]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:05:39.633748 systemd-logind[1688]: Removed session 4. Dec 16 13:05:39.722819 systemd[1]: Started sshd@2-10.200.0.34:22-10.200.16.10:52326.service - OpenSSH per-connection server daemon (10.200.16.10:52326). Dec 16 13:05:40.275121 sshd[2079]: Accepted publickey for core from 10.200.16.10 port 52326 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:40.276216 sshd-session[2079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:40.280639 systemd-logind[1688]: New session 5 of user core. Dec 16 13:05:40.290339 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:05:40.664793 sshd[2082]: Connection closed by 10.200.16.10 port 52326 Dec 16 13:05:40.665459 sshd-session[2079]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:40.668165 systemd[1]: sshd@2-10.200.0.34:22-10.200.16.10:52326.service: Deactivated successfully. Dec 16 13:05:40.669813 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:05:40.671149 systemd-logind[1688]: Session 5 logged out. 
Waiting for processes to exit. Dec 16 13:05:40.672343 systemd-logind[1688]: Removed session 5. Dec 16 13:05:40.798782 systemd[1]: Started sshd@3-10.200.0.34:22-10.200.16.10:50562.service - OpenSSH per-connection server daemon (10.200.16.10:50562). Dec 16 13:05:41.350653 sshd[2088]: Accepted publickey for core from 10.200.16.10 port 50562 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:41.351804 sshd-session[2088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:41.356218 systemd-logind[1688]: New session 6 of user core. Dec 16 13:05:41.361346 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:05:41.362653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:05:41.364065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:41.739150 sshd[2092]: Connection closed by 10.200.16.10 port 50562 Dec 16 13:05:41.739919 sshd-session[2088]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:41.742612 systemd[1]: sshd@3-10.200.0.34:22-10.200.16.10:50562.service: Deactivated successfully. Dec 16 13:05:41.744606 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:05:41.745348 systemd-logind[1688]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:05:41.746302 systemd-logind[1688]: Removed session 6. Dec 16 13:05:41.843287 systemd[1]: Started sshd@4-10.200.0.34:22-10.200.16.10:50576.service - OpenSSH per-connection server daemon (10.200.16.10:50576). Dec 16 13:05:41.874354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 13:05:41.882439 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:05:41.915464 kubelet[2108]: E1216 13:05:41.915436 2108 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:05:41.917078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:05:41.917244 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:05:41.917584 systemd[1]: kubelet.service: Consumed 127ms CPU time, 108.6M memory peak. Dec 16 13:05:42.400626 sshd[2100]: Accepted publickey for core from 10.200.16.10 port 50576 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:42.401719 sshd-session[2100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:42.406337 systemd-logind[1688]: New session 7 of user core. Dec 16 13:05:42.412326 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:05:42.447622 chronyd[1667]: Selected source PHC0 Dec 16 13:05:42.824987 sudo[2116]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:05:42.825216 sudo[2116]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:05:42.849909 sudo[2116]: pam_unix(sudo:session): session closed for user root Dec 16 13:05:42.936618 sshd[2115]: Connection closed by 10.200.16.10 port 50576 Dec 16 13:05:42.937355 sshd-session[2100]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:42.940974 systemd[1]: sshd@4-10.200.0.34:22-10.200.16.10:50576.service: Deactivated successfully. Dec 16 13:05:42.942568 systemd[1]: session-7.scope: Deactivated successfully. 
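Every kubelet failure in this log reduces to the same condition: /var/lib/kubelet/config.yaml does not exist yet, because kubeadm (which writes that file during init/join) has not run at this point in the boot. A minimal sketch of that preflight condition, using a temporary directory so the example is self-contained:

```python
# Sketch: the check the kubelet keeps failing -- its config file is
# absent until kubeadm writes it. Uses a temp dir as a stand-in for
# /var/lib/kubelet so the example touches nothing real.
import os
import tempfile

def kubelet_config_present(root: str) -> bool:
    """Does <root>/config.yaml exist, as the kubelet requires at startup?"""
    return os.path.isfile(os.path.join(root, "config.yaml"))

with tempfile.TemporaryDirectory() as fake_kubelet_dir:
    before_join = kubelet_config_present(fake_kubelet_dir)  # pre-kubeadm
    with open(os.path.join(fake_kubelet_dir, "config.yaml"), "w") as f:
        f.write("kind: KubeletConfiguration\n")
    after_join = kubelet_config_present(fake_kubelet_dir)   # post-kubeadm
print(before_join, after_join)
```

Until that file appears, systemd will keep cycling the unit through its restart logic, as the rising restart counter shows.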
Dec 16 13:05:42.943481 systemd-logind[1688]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:05:42.944572 systemd-logind[1688]: Removed session 7. Dec 16 13:05:43.036983 systemd[1]: Started sshd@5-10.200.0.34:22-10.200.16.10:50592.service - OpenSSH per-connection server daemon (10.200.16.10:50592). Dec 16 13:05:43.591583 sshd[2122]: Accepted publickey for core from 10.200.16.10 port 50592 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:43.592697 sshd-session[2122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:43.596239 systemd-logind[1688]: New session 8 of user core. Dec 16 13:05:43.606327 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 13:05:43.895795 sudo[2127]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:05:43.896015 sudo[2127]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:05:43.905748 sudo[2127]: pam_unix(sudo:session): session closed for user root Dec 16 13:05:43.909617 sudo[2126]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:05:43.909826 sudo[2126]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:05:43.917585 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:05:43.945297 augenrules[2149]: No rules Dec 16 13:05:43.946177 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:05:43.946426 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 16 13:05:43.947081 sudo[2126]: pam_unix(sudo:session): session closed for user root Dec 16 13:05:44.034067 sshd[2125]: Connection closed by 10.200.16.10 port 50592 Dec 16 13:05:44.034596 sshd-session[2122]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:44.037453 systemd[1]: sshd@5-10.200.0.34:22-10.200.16.10:50592.service: Deactivated successfully. Dec 16 13:05:44.038994 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:05:44.040813 systemd-logind[1688]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:05:44.041634 systemd-logind[1688]: Removed session 8. Dec 16 13:05:44.132020 systemd[1]: Started sshd@6-10.200.0.34:22-10.200.16.10:50594.service - OpenSSH per-connection server daemon (10.200.16.10:50594). Dec 16 13:05:44.690039 sshd[2158]: Accepted publickey for core from 10.200.16.10 port 50594 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:44.691155 sshd-session[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:44.695530 systemd-logind[1688]: New session 9 of user core. Dec 16 13:05:44.701330 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:05:44.994032 sudo[2162]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:05:44.994269 sudo[2162]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:05:46.949512 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 13:05:46.958444 (dockerd)[2180]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:05:49.236018 dockerd[2180]: time="2025-12-16T13:05:49.235800072Z" level=info msg="Starting up" Dec 16 13:05:49.237719 dockerd[2180]: time="2025-12-16T13:05:49.237677697Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:05:49.246949 dockerd[2180]: time="2025-12-16T13:05:49.246914360Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:05:49.388368 dockerd[2180]: time="2025-12-16T13:05:49.388181251Z" level=info msg="Loading containers: start." Dec 16 13:05:49.431214 kernel: Initializing XFRM netlink socket Dec 16 13:05:49.734745 systemd-networkd[1335]: docker0: Link UP Dec 16 13:05:49.753937 dockerd[2180]: time="2025-12-16T13:05:49.753898090Z" level=info msg="Loading containers: done." 
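The containerd pull records that follow report both bytes read and wall time, so effective throughput can be derived directly. For example, for the kube-apiserver v1.32.10 pull (29,068,782 bytes in 2.426370091 s, per the log below):

```python
# Sketch: effective image-pull throughput from the figures containerd
# logs for the kube-apiserver pull (size and duration taken from this
# log; the derived rate is a computation, not a logged value).
size_bytes = 29_068_782
duration_s = 2.426_370_091
mib_per_s = size_bytes / duration_s / (1024 * 1024)
print(f"{mib_per_s:.1f} MiB/s")
```

The same arithmetic applies to the later controller-manager, scheduler, proxy, coredns, pause, and etcd pulls.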
Dec 16 13:05:49.769871 dockerd[2180]: time="2025-12-16T13:05:49.769836965Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:05:49.769985 dockerd[2180]: time="2025-12-16T13:05:49.769910485Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:05:49.769985 dockerd[2180]: time="2025-12-16T13:05:49.769981199Z" level=info msg="Initializing buildkit" Dec 16 13:05:49.813675 dockerd[2180]: time="2025-12-16T13:05:49.813644137Z" level=info msg="Completed buildkit initialization" Dec 16 13:05:49.819868 dockerd[2180]: time="2025-12-16T13:05:49.819826533Z" level=info msg="Daemon has completed initialization" Dec 16 13:05:49.820093 dockerd[2180]: time="2025-12-16T13:05:49.819967786Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:05:49.820061 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:05:50.774120 containerd[1714]: time="2025-12-16T13:05:50.774071665Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 13:05:51.454629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851905390.mount: Deactivated successfully. Dec 16 13:05:52.128446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 13:05:52.130246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:52.619502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
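The kubelet restart attempts in this log are spaced by systemd's Restart= logic. Parsing the four "Scheduled restart job" timestamps (restart counters 1 through 4) shows a steady gap of roughly 10.5-10.8 s, consistent with a RestartSec on the order of 10 s; that interval is an inference from the timestamps, not a quoted unit-file setting:

```python
# Sketch: measure the spacing between the kubelet's "Scheduled restart
# job" entries (timestamps copied from this log) to estimate the unit's
# restart interval.
from datetime import datetime

stamps = ["13:05:30.837377", "13:05:41.362653",
          "13:05:52.128446", "13:06:02.878574"]
times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print([round(g, 2) for g in gaps])
```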
Dec 16 13:05:52.626445 (kubelet)[2446]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:05:52.662691 kubelet[2446]: E1216 13:05:52.662638 2446 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:05:52.664093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:05:52.664238 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:05:52.664512 systemd[1]: kubelet.service: Consumed 129ms CPU time, 108.5M memory peak. Dec 16 13:05:53.189289 containerd[1714]: time="2025-12-16T13:05:53.189240129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:53.191878 containerd[1714]: time="2025-12-16T13:05:53.191844414Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29071621" Dec 16 13:05:53.195807 containerd[1714]: time="2025-12-16T13:05:53.195761979Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:53.200043 containerd[1714]: time="2025-12-16T13:05:53.199854470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:53.200516 containerd[1714]: time="2025-12-16T13:05:53.200490184Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id 
\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 2.426370091s" Dec 16 13:05:53.200563 containerd[1714]: time="2025-12-16T13:05:53.200527909Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 16 13:05:53.201276 containerd[1714]: time="2025-12-16T13:05:53.201253057Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 13:05:54.348417 containerd[1714]: time="2025-12-16T13:05:54.348365535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:54.351468 containerd[1714]: time="2025-12-16T13:05:54.351427714Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24991942" Dec 16 13:05:54.354776 containerd[1714]: time="2025-12-16T13:05:54.354734641Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:54.360050 containerd[1714]: time="2025-12-16T13:05:54.359220626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:54.360050 containerd[1714]: time="2025-12-16T13:05:54.359828109Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo 
digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.158494666s" Dec 16 13:05:54.360050 containerd[1714]: time="2025-12-16T13:05:54.359857200Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 16 13:05:54.360575 containerd[1714]: time="2025-12-16T13:05:54.360508700Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 13:05:55.352578 containerd[1714]: time="2025-12-16T13:05:55.352529475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:55.355212 containerd[1714]: time="2025-12-16T13:05:55.355129263Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404180" Dec 16 13:05:55.359156 containerd[1714]: time="2025-12-16T13:05:55.359114891Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:55.364439 containerd[1714]: time="2025-12-16T13:05:55.364391797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:55.365272 containerd[1714]: time="2025-12-16T13:05:55.365043415Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.004504802s" Dec 16 13:05:55.365272 
containerd[1714]: time="2025-12-16T13:05:55.365074958Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 16 13:05:55.365789 containerd[1714]: time="2025-12-16T13:05:55.365771426Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 13:05:56.206829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974059649.mount: Deactivated successfully. Dec 16 13:05:56.585455 containerd[1714]: time="2025-12-16T13:05:56.585341361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:56.588219 containerd[1714]: time="2025-12-16T13:05:56.588101758Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161317" Dec 16 13:05:56.592293 containerd[1714]: time="2025-12-16T13:05:56.592229454Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:56.597323 containerd[1714]: time="2025-12-16T13:05:56.596598664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:56.597323 containerd[1714]: time="2025-12-16T13:05:56.596878846Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.231005695s" Dec 16 13:05:56.597323 containerd[1714]: time="2025-12-16T13:05:56.596907257Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 16 13:05:56.597576 containerd[1714]: time="2025-12-16T13:05:56.597562726Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 13:05:57.104522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount739493500.mount: Deactivated successfully. Dec 16 13:05:58.083834 containerd[1714]: time="2025-12-16T13:05:58.083781721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:58.088204 containerd[1714]: time="2025-12-16T13:05:58.088160129Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18564717" Dec 16 13:05:58.091952 containerd[1714]: time="2025-12-16T13:05:58.091719178Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:58.096564 containerd[1714]: time="2025-12-16T13:05:58.096533902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:58.097294 containerd[1714]: time="2025-12-16T13:05:58.097269813Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.499624473s" Dec 16 13:05:58.097346 containerd[1714]: time="2025-12-16T13:05:58.097303510Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 16 13:05:58.098074 containerd[1714]: time="2025-12-16T13:05:58.098040839Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 13:05:58.523628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount691595592.mount: Deactivated successfully. Dec 16 13:05:58.553849 containerd[1714]: time="2025-12-16T13:05:58.553807516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:05:58.557324 containerd[1714]: time="2025-12-16T13:05:58.557291026Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Dec 16 13:05:58.561864 containerd[1714]: time="2025-12-16T13:05:58.561821399Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:05:58.567354 containerd[1714]: time="2025-12-16T13:05:58.567314061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:05:58.567797 containerd[1714]: time="2025-12-16T13:05:58.567669745Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 469.599336ms" Dec 16 13:05:58.567797 containerd[1714]: time="2025-12-16T13:05:58.567698259Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns 
image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 13:05:58.568202 containerd[1714]: time="2025-12-16T13:05:58.568149424Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 13:05:59.096440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3721061807.mount: Deactivated successfully. Dec 16 13:06:00.810529 containerd[1714]: time="2025-12-16T13:06:00.810474362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:00.814054 containerd[1714]: time="2025-12-16T13:06:00.814017431Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57681646" Dec 16 13:06:00.818597 containerd[1714]: time="2025-12-16T13:06:00.818568255Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:00.825277 containerd[1714]: time="2025-12-16T13:06:00.824414623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:00.825277 containerd[1714]: time="2025-12-16T13:06:00.825141065Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.256967778s" Dec 16 13:06:00.825277 containerd[1714]: time="2025-12-16T13:06:00.825169168Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 16 13:06:01.529215 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Dec 16 13:06:02.878574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 13:06:02.882401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:03.141054 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:06:03.141151 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:06:03.141445 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:06:03.152181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:03.167620 systemd[1]: Reload requested from client PID 2618 ('systemctl') (unit session-9.scope)... Dec 16 13:06:03.167728 systemd[1]: Reloading... Dec 16 13:06:03.257247 zram_generator::config[2666]: No configuration found. Dec 16 13:06:03.473464 systemd[1]: Reloading finished in 305 ms. Dec 16 13:06:03.506644 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:06:03.506715 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:06:03.506947 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:06:03.508810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:04.160416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:06:04.168500 (kubelet)[2733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:06:04.203744 kubelet[2733]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:06:04.203744 kubelet[2733]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Dec 16 13:06:04.203744 kubelet[2733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:06:04.204037 kubelet[2733]: I1216 13:06:04.203739 2733 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:06:04.314201 update_engine[1690]: I20251216 13:06:04.314132 1690 update_attempter.cc:509] Updating boot flags... Dec 16 13:06:04.621640 kubelet[2733]: I1216 13:06:04.621548 2733 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 13:06:04.621640 kubelet[2733]: I1216 13:06:04.621571 2733 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:06:04.621968 kubelet[2733]: I1216 13:06:04.621869 2733 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 13:06:04.655237 kubelet[2733]: E1216 13:06:04.655174 2733 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.0.34:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:06:04.657941 kubelet[2733]: I1216 13:06:04.657914 2733 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:06:04.665653 kubelet[2733]: I1216 13:06:04.665639 2733 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:06:04.668215 kubelet[2733]: I1216 13:06:04.668163 2733 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:06:04.670209 kubelet[2733]: I1216 13:06:04.670169 2733 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:06:04.670374 kubelet[2733]: I1216 13:06:04.670206 2733 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-d193ee2782","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:06:04.670478 kubelet[2733]: I1216 13:06:04.670380 2733 topology_manager.go:138] "Creating topology manager 
with none policy" Dec 16 13:06:04.670478 kubelet[2733]: I1216 13:06:04.670389 2733 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 13:06:04.670530 kubelet[2733]: I1216 13:06:04.670499 2733 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:06:04.673747 kubelet[2733]: I1216 13:06:04.673728 2733 kubelet.go:446] "Attempting to sync node with API server" Dec 16 13:06:04.673816 kubelet[2733]: I1216 13:06:04.673755 2733 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:06:04.673816 kubelet[2733]: I1216 13:06:04.673777 2733 kubelet.go:352] "Adding apiserver pod source" Dec 16 13:06:04.673816 kubelet[2733]: I1216 13:06:04.673788 2733 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:06:04.682710 kubelet[2733]: W1216 13:06:04.681930 2733 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.0.34:6443: connect: connection refused Dec 16 13:06:04.682710 kubelet[2733]: E1216 13:06:04.682001 2733 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.0.34:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:06:04.682710 kubelet[2733]: I1216 13:06:04.682113 2733 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:06:04.682710 kubelet[2733]: I1216 13:06:04.682571 2733 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 13:06:04.683228 kubelet[2733]: W1216 13:06:04.683204 2733 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:06:04.685163 kubelet[2733]: W1216 13:06:04.685097 2733 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-d193ee2782&limit=500&resourceVersion=0": dial tcp 10.200.0.34:6443: connect: connection refused Dec 16 13:06:04.685163 kubelet[2733]: E1216 13:06:04.685140 2733 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-d193ee2782&limit=500&resourceVersion=0\": dial tcp 10.200.0.34:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:06:04.686410 kubelet[2733]: I1216 13:06:04.686391 2733 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:06:04.686465 kubelet[2733]: I1216 13:06:04.686432 2733 server.go:1287] "Started kubelet" Dec 16 13:06:04.686555 kubelet[2733]: I1216 13:06:04.686521 2733 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:06:04.687425 kubelet[2733]: I1216 13:06:04.687413 2733 server.go:479] "Adding debug handlers to kubelet server" Dec 16 13:06:04.689619 kubelet[2733]: I1216 13:06:04.689491 2733 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:06:04.691611 kubelet[2733]: I1216 13:06:04.691557 2733 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:06:04.691776 kubelet[2733]: I1216 13:06:04.691763 2733 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:06:04.693614 kubelet[2733]: E1216 13:06:04.691905 2733 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.0.34:6443: 
connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-a-d193ee2782.1881b3ee3b6b18ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-d193ee2782,UID:ci-4459.2.2-a-d193ee2782,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-d193ee2782,},FirstTimestamp:2025-12-16 13:06:04.686407852 +0000 UTC m=+0.514226404,LastTimestamp:2025-12-16 13:06:04.686407852 +0000 UTC m=+0.514226404,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-d193ee2782,}" Dec 16 13:06:04.697161 kubelet[2733]: I1216 13:06:04.697012 2733 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:06:04.699548 kubelet[2733]: I1216 13:06:04.699533 2733 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:06:04.699703 kubelet[2733]: E1216 13:06:04.697159 2733 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-d193ee2782\" not found" Dec 16 13:06:04.699985 kubelet[2733]: I1216 13:06:04.699974 2733 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:06:04.700567 kubelet[2733]: I1216 13:06:04.700554 2733 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:06:04.701324 kubelet[2733]: E1216 13:06:04.701301 2733 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-d193ee2782?timeout=10s\": dial tcp 10.200.0.34:6443: connect: connection refused" interval="200ms" Dec 16 13:06:04.705215 kubelet[2733]: I1216 13:06:04.701634 2733 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:06:04.705215 kubelet[2733]: E1216 13:06:04.702660 2733 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:06:04.705215 kubelet[2733]: W1216 13:06:04.702772 2733 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.0.34:6443: connect: connection refused Dec 16 13:06:04.705215 kubelet[2733]: E1216 13:06:04.702814 2733 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.0.34:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:06:04.709990 kubelet[2733]: I1216 13:06:04.709969 2733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 13:06:04.711876 kubelet[2733]: I1216 13:06:04.711861 2733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 13:06:04.711950 kubelet[2733]: I1216 13:06:04.711944 2733 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 13:06:04.712004 kubelet[2733]: I1216 13:06:04.711997 2733 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:06:04.712234 kubelet[2733]: I1216 13:06:04.712227 2733 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 13:06:04.712324 kubelet[2733]: E1216 13:06:04.712307 2733 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:06:04.717483 kubelet[2733]: I1216 13:06:04.717464 2733 factory.go:221] Registration of the containerd container factory successfully Dec 16 13:06:04.717483 kubelet[2733]: I1216 13:06:04.717478 2733 factory.go:221] Registration of the systemd container factory successfully Dec 16 13:06:04.718118 kubelet[2733]: W1216 13:06:04.718081 2733 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.0.34:6443: connect: connection refused Dec 16 13:06:04.718170 kubelet[2733]: E1216 13:06:04.718122 2733 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.0.34:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:06:04.735682 kubelet[2733]: I1216 13:06:04.735660 2733 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:06:04.735682 kubelet[2733]: I1216 13:06:04.735672 2733 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:06:04.735803 kubelet[2733]: I1216 13:06:04.735724 2733 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:06:04.743680 kubelet[2733]: I1216 13:06:04.743666 2733 policy_none.go:49] "None policy: Start" Dec 16 13:06:04.743720 kubelet[2733]: I1216 13:06:04.743683 2733 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:06:04.743720 kubelet[2733]: I1216 13:06:04.743693 2733 
state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:06:04.752132 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:06:04.761484 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:06:04.764018 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:06:04.774867 kubelet[2733]: I1216 13:06:04.774836 2733 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 13:06:04.774999 kubelet[2733]: I1216 13:06:04.774988 2733 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:06:04.775803 kubelet[2733]: I1216 13:06:04.775003 2733 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:06:04.775803 kubelet[2733]: I1216 13:06:04.775447 2733 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:06:04.776838 kubelet[2733]: E1216 13:06:04.776815 2733 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:06:04.777056 kubelet[2733]: E1216 13:06:04.777042 2733 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-a-d193ee2782\" not found" Dec 16 13:06:04.823970 systemd[1]: Created slice kubepods-burstable-pod3f0944b0f80a50baf36cd91295a77b95.slice - libcontainer container kubepods-burstable-pod3f0944b0f80a50baf36cd91295a77b95.slice. 
Dec 16 13:06:04.834781 kubelet[2733]: E1216 13:06:04.834761 2733 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-d193ee2782\" not found" node="ci-4459.2.2-a-d193ee2782" Dec 16 13:06:04.837068 systemd[1]: Created slice kubepods-burstable-podd80533f28d79d0b622dc9e03aeb374db.slice - libcontainer container kubepods-burstable-podd80533f28d79d0b622dc9e03aeb374db.slice. Dec 16 13:06:04.846078 kubelet[2733]: E1216 13:06:04.846063 2733 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-d193ee2782\" not found" node="ci-4459.2.2-a-d193ee2782" Dec 16 13:06:04.848128 systemd[1]: Created slice kubepods-burstable-pod50953dce239998c65ba8b71cd2ce3121.slice - libcontainer container kubepods-burstable-pod50953dce239998c65ba8b71cd2ce3121.slice. Dec 16 13:06:04.849787 kubelet[2733]: E1216 13:06:04.849762 2733 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-d193ee2782\" not found" node="ci-4459.2.2-a-d193ee2782" Dec 16 13:06:04.877141 kubelet[2733]: I1216 13:06:04.877072 2733 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-d193ee2782" Dec 16 13:06:04.878067 kubelet[2733]: E1216 13:06:04.878020 2733 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.34:6443/api/v1/nodes\": dial tcp 10.200.0.34:6443: connect: connection refused" node="ci-4459.2.2-a-d193ee2782" Dec 16 13:06:04.901708 kubelet[2733]: I1216 13:06:04.901688 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f0944b0f80a50baf36cd91295a77b95-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-d193ee2782\" (UID: \"3f0944b0f80a50baf36cd91295a77b95\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-d193ee2782" Dec 16 13:06:04.901762 
kubelet[2733]: I1216 13:06:04.901717 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d80533f28d79d0b622dc9e03aeb374db-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-d193ee2782\" (UID: \"d80533f28d79d0b622dc9e03aeb374db\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-d193ee2782" Dec 16 13:06:04.901762 kubelet[2733]: I1216 13:06:04.901736 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50953dce239998c65ba8b71cd2ce3121-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-d193ee2782\" (UID: \"50953dce239998c65ba8b71cd2ce3121\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782" Dec 16 13:06:04.901762 kubelet[2733]: I1216 13:06:04.901753 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50953dce239998c65ba8b71cd2ce3121-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-d193ee2782\" (UID: \"50953dce239998c65ba8b71cd2ce3121\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782" Dec 16 13:06:04.901843 kubelet[2733]: I1216 13:06:04.901772 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50953dce239998c65ba8b71cd2ce3121-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-d193ee2782\" (UID: \"50953dce239998c65ba8b71cd2ce3121\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782" Dec 16 13:06:04.901843 kubelet[2733]: I1216 13:06:04.901797 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d80533f28d79d0b622dc9e03aeb374db-ca-certs\") pod 
\"kube-apiserver-ci-4459.2.2-a-d193ee2782\" (UID: \"d80533f28d79d0b622dc9e03aeb374db\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-d193ee2782" Dec 16 13:06:04.901843 kubelet[2733]: I1216 13:06:04.901816 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d80533f28d79d0b622dc9e03aeb374db-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-d193ee2782\" (UID: \"d80533f28d79d0b622dc9e03aeb374db\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-d193ee2782" Dec 16 13:06:04.901843 kubelet[2733]: I1216 13:06:04.901834 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50953dce239998c65ba8b71cd2ce3121-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-d193ee2782\" (UID: \"50953dce239998c65ba8b71cd2ce3121\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782" Dec 16 13:06:04.901927 kubelet[2733]: I1216 13:06:04.901851 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50953dce239998c65ba8b71cd2ce3121-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-d193ee2782\" (UID: \"50953dce239998c65ba8b71cd2ce3121\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782" Dec 16 13:06:04.902009 kubelet[2733]: E1216 13:06:04.901976 2733 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-d193ee2782?timeout=10s\": dial tcp 10.200.0.34:6443: connect: connection refused" interval="400ms" Dec 16 13:06:05.080507 kubelet[2733]: I1216 13:06:05.080109 2733 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-d193ee2782" Dec 16 13:06:05.080507 kubelet[2733]: E1216 
13:06:05.080467 2733 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.34:6443/api/v1/nodes\": dial tcp 10.200.0.34:6443: connect: connection refused" node="ci-4459.2.2-a-d193ee2782" Dec 16 13:06:05.136753 containerd[1714]: time="2025-12-16T13:06:05.136702509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-d193ee2782,Uid:3f0944b0f80a50baf36cd91295a77b95,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:05.147110 containerd[1714]: time="2025-12-16T13:06:05.147078253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-d193ee2782,Uid:d80533f28d79d0b622dc9e03aeb374db,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:05.151368 containerd[1714]: time="2025-12-16T13:06:05.151335195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-d193ee2782,Uid:50953dce239998c65ba8b71cd2ce3121,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:05.204172 containerd[1714]: time="2025-12-16T13:06:05.204137875Z" level=info msg="connecting to shim 0f62dfc87b92b472ff955968946836b2637cec048664a55d44bb079b7d6d1029" address="unix:///run/containerd/s/5065b27df4be9a23017986aba05480beffe382b3917b25b0b036f97a3ec574a8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:05.225226 systemd[1]: Started cri-containerd-0f62dfc87b92b472ff955968946836b2637cec048664a55d44bb079b7d6d1029.scope - libcontainer container 0f62dfc87b92b472ff955968946836b2637cec048664a55d44bb079b7d6d1029. 
Dec 16 13:06:05.231021 containerd[1714]: time="2025-12-16T13:06:05.230582182Z" level=info msg="connecting to shim 80992bac593c7f3b7f0f5efe012d9897b3fd80be08024c1e9ecc80acefe7b067" address="unix:///run/containerd/s/0f411575f2e1fbd9f142659316d4c8ac1b64882510b5907b610587ecdfb090d3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:05.237833 containerd[1714]: time="2025-12-16T13:06:05.237766551Z" level=info msg="connecting to shim aae3ecb22814baf7f6288b5e3e2ebab0f204318e8f2fd31157789e19730cbc0e" address="unix:///run/containerd/s/a362274bd6e64aab78656a83642abf7b02cc57e283be6d99d9e8597ec8bdd451" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:05.270366 systemd[1]: Started cri-containerd-80992bac593c7f3b7f0f5efe012d9897b3fd80be08024c1e9ecc80acefe7b067.scope - libcontainer container 80992bac593c7f3b7f0f5efe012d9897b3fd80be08024c1e9ecc80acefe7b067. Dec 16 13:06:05.274818 systemd[1]: Started cri-containerd-aae3ecb22814baf7f6288b5e3e2ebab0f204318e8f2fd31157789e19730cbc0e.scope - libcontainer container aae3ecb22814baf7f6288b5e3e2ebab0f204318e8f2fd31157789e19730cbc0e. 
Dec 16 13:06:05.302887 kubelet[2733]: E1216 13:06:05.302828 2733 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-d193ee2782?timeout=10s\": dial tcp 10.200.0.34:6443: connect: connection refused" interval="800ms" Dec 16 13:06:05.322787 containerd[1714]: time="2025-12-16T13:06:05.322709125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-d193ee2782,Uid:3f0944b0f80a50baf36cd91295a77b95,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f62dfc87b92b472ff955968946836b2637cec048664a55d44bb079b7d6d1029\"" Dec 16 13:06:05.333754 containerd[1714]: time="2025-12-16T13:06:05.333719352Z" level=info msg="CreateContainer within sandbox \"0f62dfc87b92b472ff955968946836b2637cec048664a55d44bb079b7d6d1029\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:06:05.345993 containerd[1714]: time="2025-12-16T13:06:05.345957158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-d193ee2782,Uid:d80533f28d79d0b622dc9e03aeb374db,Namespace:kube-system,Attempt:0,} returns sandbox id \"80992bac593c7f3b7f0f5efe012d9897b3fd80be08024c1e9ecc80acefe7b067\"" Dec 16 13:06:05.348033 containerd[1714]: time="2025-12-16T13:06:05.347536768Z" level=info msg="CreateContainer within sandbox \"80992bac593c7f3b7f0f5efe012d9897b3fd80be08024c1e9ecc80acefe7b067\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:06:05.362182 containerd[1714]: time="2025-12-16T13:06:05.362164724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-d193ee2782,Uid:50953dce239998c65ba8b71cd2ce3121,Namespace:kube-system,Attempt:0,} returns sandbox id \"aae3ecb22814baf7f6288b5e3e2ebab0f204318e8f2fd31157789e19730cbc0e\"" Dec 16 13:06:05.363915 containerd[1714]: time="2025-12-16T13:06:05.363891209Z" level=info msg="CreateContainer 
within sandbox \"aae3ecb22814baf7f6288b5e3e2ebab0f204318e8f2fd31157789e19730cbc0e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:06:05.372630 containerd[1714]: time="2025-12-16T13:06:05.372598173Z" level=info msg="Container 175ad9e33f10595f348bd6fcc482d3187ffa7e986890909114b87ff49f1ed99e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:05.381565 containerd[1714]: time="2025-12-16T13:06:05.381067970Z" level=info msg="Container 60e3ecb29b4c7026ffb70fa80ec461078cdbdbca25276464e7ad88283c51fadd: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:05.408693 containerd[1714]: time="2025-12-16T13:06:05.408618587Z" level=info msg="CreateContainer within sandbox \"0f62dfc87b92b472ff955968946836b2637cec048664a55d44bb079b7d6d1029\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"175ad9e33f10595f348bd6fcc482d3187ffa7e986890909114b87ff49f1ed99e\"" Dec 16 13:06:05.409033 containerd[1714]: time="2025-12-16T13:06:05.409013739Z" level=info msg="StartContainer for \"175ad9e33f10595f348bd6fcc482d3187ffa7e986890909114b87ff49f1ed99e\"" Dec 16 13:06:05.409920 containerd[1714]: time="2025-12-16T13:06:05.409890044Z" level=info msg="connecting to shim 175ad9e33f10595f348bd6fcc482d3187ffa7e986890909114b87ff49f1ed99e" address="unix:///run/containerd/s/5065b27df4be9a23017986aba05480beffe382b3917b25b0b036f97a3ec574a8" protocol=ttrpc version=3 Dec 16 13:06:05.414202 containerd[1714]: time="2025-12-16T13:06:05.413478099Z" level=info msg="Container 97041105dd341b320d9ca4b24ee971d106a6e3b77e1755a6af34cd8be388bea3: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:05.427401 systemd[1]: Started cri-containerd-175ad9e33f10595f348bd6fcc482d3187ffa7e986890909114b87ff49f1ed99e.scope - libcontainer container 175ad9e33f10595f348bd6fcc482d3187ffa7e986890909114b87ff49f1ed99e. 
Dec 16 13:06:05.429292 containerd[1714]: time="2025-12-16T13:06:05.429270440Z" level=info msg="CreateContainer within sandbox \"80992bac593c7f3b7f0f5efe012d9897b3fd80be08024c1e9ecc80acefe7b067\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"60e3ecb29b4c7026ffb70fa80ec461078cdbdbca25276464e7ad88283c51fadd\"" Dec 16 13:06:05.430138 containerd[1714]: time="2025-12-16T13:06:05.430115339Z" level=info msg="StartContainer for \"60e3ecb29b4c7026ffb70fa80ec461078cdbdbca25276464e7ad88283c51fadd\"" Dec 16 13:06:05.431261 containerd[1714]: time="2025-12-16T13:06:05.431155535Z" level=info msg="connecting to shim 60e3ecb29b4c7026ffb70fa80ec461078cdbdbca25276464e7ad88283c51fadd" address="unix:///run/containerd/s/0f411575f2e1fbd9f142659316d4c8ac1b64882510b5907b610587ecdfb090d3" protocol=ttrpc version=3 Dec 16 13:06:05.438041 containerd[1714]: time="2025-12-16T13:06:05.437999977Z" level=info msg="CreateContainer within sandbox \"aae3ecb22814baf7f6288b5e3e2ebab0f204318e8f2fd31157789e19730cbc0e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"97041105dd341b320d9ca4b24ee971d106a6e3b77e1755a6af34cd8be388bea3\"" Dec 16 13:06:05.438819 containerd[1714]: time="2025-12-16T13:06:05.438702471Z" level=info msg="StartContainer for \"97041105dd341b320d9ca4b24ee971d106a6e3b77e1755a6af34cd8be388bea3\"" Dec 16 13:06:05.441287 containerd[1714]: time="2025-12-16T13:06:05.441264914Z" level=info msg="connecting to shim 97041105dd341b320d9ca4b24ee971d106a6e3b77e1755a6af34cd8be388bea3" address="unix:///run/containerd/s/a362274bd6e64aab78656a83642abf7b02cc57e283be6d99d9e8597ec8bdd451" protocol=ttrpc version=3 Dec 16 13:06:05.457348 systemd[1]: Started cri-containerd-60e3ecb29b4c7026ffb70fa80ec461078cdbdbca25276464e7ad88283c51fadd.scope - libcontainer container 60e3ecb29b4c7026ffb70fa80ec461078cdbdbca25276464e7ad88283c51fadd. 
Dec 16 13:06:05.469312 systemd[1]: Started cri-containerd-97041105dd341b320d9ca4b24ee971d106a6e3b77e1755a6af34cd8be388bea3.scope - libcontainer container 97041105dd341b320d9ca4b24ee971d106a6e3b77e1755a6af34cd8be388bea3. Dec 16 13:06:05.483887 kubelet[2733]: I1216 13:06:05.483863 2733 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-d193ee2782" Dec 16 13:06:05.487025 kubelet[2733]: E1216 13:06:05.486995 2733 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.34:6443/api/v1/nodes\": dial tcp 10.200.0.34:6443: connect: connection refused" node="ci-4459.2.2-a-d193ee2782" Dec 16 13:06:05.514339 containerd[1714]: time="2025-12-16T13:06:05.514220191Z" level=info msg="StartContainer for \"175ad9e33f10595f348bd6fcc482d3187ffa7e986890909114b87ff49f1ed99e\" returns successfully" Dec 16 13:06:05.553370 containerd[1714]: time="2025-12-16T13:06:05.553338960Z" level=info msg="StartContainer for \"60e3ecb29b4c7026ffb70fa80ec461078cdbdbca25276464e7ad88283c51fadd\" returns successfully" Dec 16 13:06:05.558013 containerd[1714]: time="2025-12-16T13:06:05.557989728Z" level=info msg="StartContainer for \"97041105dd341b320d9ca4b24ee971d106a6e3b77e1755a6af34cd8be388bea3\" returns successfully" Dec 16 13:06:05.742616 kubelet[2733]: E1216 13:06:05.742345 2733 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-d193ee2782\" not found" node="ci-4459.2.2-a-d193ee2782" Dec 16 13:06:05.743998 kubelet[2733]: E1216 13:06:05.743979 2733 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-d193ee2782\" not found" node="ci-4459.2.2-a-d193ee2782" Dec 16 13:06:05.748719 kubelet[2733]: E1216 13:06:05.748615 2733 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-d193ee2782\" not found" node="ci-4459.2.2-a-d193ee2782" 
Dec 16 13:06:06.291954 kubelet[2733]: I1216 13:06:06.291394 2733 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:06.751212 kubelet[2733]: E1216 13:06:06.751162 2733 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-d193ee2782\" not found" node="ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:06.752462 kubelet[2733]: E1216 13:06:06.752301 2733 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-d193ee2782\" not found" node="ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:07.203437 kubelet[2733]: E1216 13:06:07.203394 2733 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-a-d193ee2782\" not found" node="ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:07.298598 kubelet[2733]: I1216 13:06:07.298409 2733 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:07.298772 kubelet[2733]: I1216 13:06:07.298764 2733 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:07.346778 kubelet[2733]: E1216 13:06:07.346744 2733 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-d193ee2782\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:07.346778 kubelet[2733]: I1216 13:06:07.346775 2733 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:07.349409 kubelet[2733]: E1216 13:06:07.349385 2733 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-d193ee2782\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:07.349409 kubelet[2733]: I1216 13:06:07.349406 2733 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:07.355535 kubelet[2733]: E1216 13:06:07.355502 2733 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-d193ee2782\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:07.677989 kubelet[2733]: I1216 13:06:07.677949 2733 apiserver.go:52] "Watching apiserver"
Dec 16 13:06:07.700694 kubelet[2733]: I1216 13:06:07.700663 2733 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 16 13:06:08.275562 kubelet[2733]: I1216 13:06:08.275534 2733 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:08.283965 kubelet[2733]: W1216 13:06:08.283937 2733 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 16 13:06:09.105464 systemd[1]: Reload requested from client PID 3037 ('systemctl') (unit session-9.scope)...
Dec 16 13:06:09.105478 systemd[1]: Reloading...
Dec 16 13:06:09.175230 zram_generator::config[3080]: No configuration found.
Dec 16 13:06:09.338522 kubelet[2733]: I1216 13:06:09.338481 2733 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:09.354659 kubelet[2733]: W1216 13:06:09.354427 2733 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 16 13:06:09.377360 systemd[1]: Reloading finished in 271 ms.
Dec 16 13:06:09.403813 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:06:09.422024 systemd[1]: kubelet.service: Deactivated successfully.
Dec 16 13:06:09.422275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:06:09.422323 systemd[1]: kubelet.service: Consumed 733ms CPU time, 131.5M memory peak.
Dec 16 13:06:09.423689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:06:09.921260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:06:09.925279 (kubelet)[3151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 13:06:09.964702 kubelet[3151]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:06:09.964702 kubelet[3151]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 13:06:09.964702 kubelet[3151]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:06:09.964964 kubelet[3151]: I1216 13:06:09.964725 3151 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 13:06:09.970567 kubelet[3151]: I1216 13:06:09.970545 3151 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 16 13:06:09.970567 kubelet[3151]: I1216 13:06:09.970563 3151 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 13:06:09.970770 kubelet[3151]: I1216 13:06:09.970758 3151 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 16 13:06:09.971853 kubelet[3151]: I1216 13:06:09.971834 3151 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 16 13:06:09.973553 kubelet[3151]: I1216 13:06:09.973532 3151 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 13:06:09.978334 kubelet[3151]: I1216 13:06:09.978317 3151 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 13:06:09.981540 kubelet[3151]: I1216 13:06:09.981520 3151 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 13:06:09.981862 kubelet[3151]: I1216 13:06:09.981835 3151 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 13:06:09.982179 kubelet[3151]: I1216 13:06:09.981923 3151 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-d193ee2782","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 13:06:09.982354 kubelet[3151]: I1216 13:06:09.982344 3151 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 13:06:09.982407 kubelet[3151]: I1216 13:06:09.982400 3151 container_manager_linux.go:304] "Creating device plugin manager"
Dec 16 13:06:09.982498 kubelet[3151]: I1216 13:06:09.982491 3151 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:06:09.982689 kubelet[3151]: I1216 13:06:09.982680 3151 kubelet.go:446] "Attempting to sync node with API server"
Dec 16 13:06:09.982752 kubelet[3151]: I1216 13:06:09.982746 3151 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 13:06:09.982834 kubelet[3151]: I1216 13:06:09.982828 3151 kubelet.go:352] "Adding apiserver pod source"
Dec 16 13:06:09.982909 kubelet[3151]: I1216 13:06:09.982881 3151 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 13:06:09.989715 kubelet[3151]: I1216 13:06:09.988545 3151 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 13:06:09.989715 kubelet[3151]: I1216 13:06:09.988911 3151 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 16 13:06:09.989715 kubelet[3151]: I1216 13:06:09.989293 3151 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 13:06:09.989715 kubelet[3151]: I1216 13:06:09.989318 3151 server.go:1287] "Started kubelet"
Dec 16 13:06:09.995105 kubelet[3151]: I1216 13:06:09.995088 3151 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 13:06:10.001122 kubelet[3151]: I1216 13:06:10.001091 3151 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 13:06:10.001999 kubelet[3151]: I1216 13:06:10.001977 3151 server.go:479] "Adding debug handlers to kubelet server"
Dec 16 13:06:10.002896 kubelet[3151]: I1216 13:06:10.002849 3151 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 13:06:10.003033 kubelet[3151]: I1216 13:06:10.003018 3151 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 13:06:10.003264 kubelet[3151]: I1216 13:06:10.003251 3151 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 13:06:10.004500 kubelet[3151]: I1216 13:06:10.004484 3151 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 13:06:10.004667 kubelet[3151]: E1216 13:06:10.004653 3151 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-d193ee2782\" not found"
Dec 16 13:06:10.006209 kubelet[3151]: I1216 13:06:10.006168 3151 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 16 13:06:10.006305 kubelet[3151]: I1216 13:06:10.006291 3151 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 13:06:10.008085 kubelet[3151]: I1216 13:06:10.008050 3151 factory.go:221] Registration of the systemd container factory successfully
Dec 16 13:06:10.010223 kubelet[3151]: I1216 13:06:10.008129 3151 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 13:06:10.010223 kubelet[3151]: I1216 13:06:10.008268 3151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 16 13:06:10.010223 kubelet[3151]: I1216 13:06:10.009264 3151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 16 13:06:10.010223 kubelet[3151]: I1216 13:06:10.009288 3151 status_manager.go:227] "Starting to sync pod status with apiserver"
Dec 16 13:06:10.010223 kubelet[3151]: I1216 13:06:10.009304 3151 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 13:06:10.010223 kubelet[3151]: I1216 13:06:10.009310 3151 kubelet.go:2382] "Starting kubelet main sync loop"
Dec 16 13:06:10.010223 kubelet[3151]: E1216 13:06:10.009346 3151 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 13:06:10.018634 kubelet[3151]: E1216 13:06:10.018414 3151 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 13:06:10.020415 kubelet[3151]: I1216 13:06:10.020395 3151 factory.go:221] Registration of the containerd container factory successfully
Dec 16 13:06:10.063161 kubelet[3151]: I1216 13:06:10.063147 3151 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 13:06:10.063261 kubelet[3151]: I1216 13:06:10.063255 3151 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 13:06:10.063296 kubelet[3151]: I1216 13:06:10.063293 3151 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:06:10.063427 kubelet[3151]: I1216 13:06:10.063422 3151 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 16 13:06:10.063463 kubelet[3151]: I1216 13:06:10.063452 3151 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 16 13:06:10.063487 kubelet[3151]: I1216 13:06:10.063484 3151 policy_none.go:49] "None policy: Start"
Dec 16 13:06:10.063517 kubelet[3151]: I1216 13:06:10.063513 3151 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 16 13:06:10.063541 kubelet[3151]: I1216 13:06:10.063538 3151 state_mem.go:35] "Initializing new in-memory state store"
Dec 16 13:06:10.063627 kubelet[3151]: I1216 13:06:10.063623 3151 state_mem.go:75] "Updated machine memory state"
Dec 16 13:06:10.066748 kubelet[3151]: I1216 13:06:10.066730 3151 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 16 13:06:10.066892 kubelet[3151]: I1216 13:06:10.066882 3151 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 13:06:10.066925 kubelet[3151]: I1216 13:06:10.066895 3151 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 13:06:10.067358 kubelet[3151]: I1216 13:06:10.067085 3151 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 13:06:10.068727 kubelet[3151]: E1216 13:06:10.068709 3151 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 13:06:10.110048 kubelet[3151]: I1216 13:06:10.110022 3151 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.110417 kubelet[3151]: I1216 13:06:10.110400 3151 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.110649 kubelet[3151]: I1216 13:06:10.110638 3151 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.118256 kubelet[3151]: W1216 13:06:10.118233 3151 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 16 13:06:10.122713 kubelet[3151]: W1216 13:06:10.122643 3151 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 16 13:06:10.122813 kubelet[3151]: E1216 13:06:10.122800 3151 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-d193ee2782\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.123008 kubelet[3151]: W1216 13:06:10.123000 3151 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 16 13:06:10.123096 kubelet[3151]: E1216 13:06:10.123087 3151 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-d193ee2782\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.123870 sudo[3186]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 16 13:06:10.124119 sudo[3186]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 16 13:06:10.169721 kubelet[3151]: I1216 13:06:10.169689 3151 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.184126 kubelet[3151]: I1216 13:06:10.183891 3151 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.184126 kubelet[3151]: I1216 13:06:10.183948 3151 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.307158 kubelet[3151]: I1216 13:06:10.307120 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50953dce239998c65ba8b71cd2ce3121-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-d193ee2782\" (UID: \"50953dce239998c65ba8b71cd2ce3121\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.307158 kubelet[3151]: I1216 13:06:10.307166 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50953dce239998c65ba8b71cd2ce3121-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-d193ee2782\" (UID: \"50953dce239998c65ba8b71cd2ce3121\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.307158 kubelet[3151]: I1216 13:06:10.307204 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50953dce239998c65ba8b71cd2ce3121-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-d193ee2782\" (UID: \"50953dce239998c65ba8b71cd2ce3121\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.307966 kubelet[3151]: I1216 13:06:10.307223 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f0944b0f80a50baf36cd91295a77b95-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-d193ee2782\" (UID: \"3f0944b0f80a50baf36cd91295a77b95\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.307966 kubelet[3151]: I1216 13:06:10.307239 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d80533f28d79d0b622dc9e03aeb374db-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-d193ee2782\" (UID: \"d80533f28d79d0b622dc9e03aeb374db\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.307966 kubelet[3151]: I1216 13:06:10.307255 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50953dce239998c65ba8b71cd2ce3121-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-d193ee2782\" (UID: \"50953dce239998c65ba8b71cd2ce3121\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.307966 kubelet[3151]: I1216 13:06:10.307274 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50953dce239998c65ba8b71cd2ce3121-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-d193ee2782\" (UID: \"50953dce239998c65ba8b71cd2ce3121\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.307966 kubelet[3151]: I1216 13:06:10.307293 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d80533f28d79d0b622dc9e03aeb374db-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-d193ee2782\" (UID: \"d80533f28d79d0b622dc9e03aeb374db\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.308062 kubelet[3151]: I1216 13:06:10.307311 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d80533f28d79d0b622dc9e03aeb374db-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-d193ee2782\" (UID: \"d80533f28d79d0b622dc9e03aeb374db\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-d193ee2782"
Dec 16 13:06:10.473899 sudo[3186]: pam_unix(sudo:session): session closed for user root
Dec 16 13:06:10.991720 kubelet[3151]: I1216 13:06:10.991691 3151 apiserver.go:52] "Watching apiserver"
Dec 16 13:06:11.007125 kubelet[3151]: I1216 13:06:11.007093 3151 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 16 13:06:11.072589 kubelet[3151]: I1216 13:06:11.072524 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-a-d193ee2782" podStartSLOduration=1.072507675 podStartE2EDuration="1.072507675s" podCreationTimestamp="2025-12-16 13:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:11.063149675 +0000 UTC m=+1.134328970" watchObservedRunningTime="2025-12-16 13:06:11.072507675 +0000 UTC m=+1.143686973"
Dec 16 13:06:11.093074 kubelet[3151]: I1216 13:06:11.092778 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-d193ee2782" podStartSLOduration=2.092763062 podStartE2EDuration="2.092763062s" podCreationTimestamp="2025-12-16 13:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:11.072629097 +0000 UTC m=+1.143808389" watchObservedRunningTime="2025-12-16 13:06:11.092763062 +0000 UTC m=+1.163942360"
Dec 16 13:06:11.103179 kubelet[3151]: I1216 13:06:11.103072 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-a-d193ee2782" podStartSLOduration=3.103059472 podStartE2EDuration="3.103059472s" podCreationTimestamp="2025-12-16 13:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:11.092727734 +0000 UTC m=+1.163907031" watchObservedRunningTime="2025-12-16 13:06:11.103059472 +0000 UTC m=+1.174238764"
Dec 16 13:06:11.682106 sudo[2162]: pam_unix(sudo:session): session closed for user root
Dec 16 13:06:11.772512 sshd[2161]: Connection closed by 10.200.16.10 port 50594
Dec 16 13:06:11.773036 sshd-session[2158]: pam_unix(sshd:session): session closed for user core
Dec 16 13:06:11.777014 systemd[1]: sshd@6-10.200.0.34:22-10.200.16.10:50594.service: Deactivated successfully.
Dec 16 13:06:11.779750 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 13:06:11.780122 systemd[1]: session-9.scope: Consumed 3.118s CPU time, 269.1M memory peak.
Dec 16 13:06:11.781501 systemd-logind[1688]: Session 9 logged out. Waiting for processes to exit.
Dec 16 13:06:11.782537 systemd-logind[1688]: Removed session 9.
Dec 16 13:06:14.853145 kubelet[3151]: I1216 13:06:14.853115 3151 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 16 13:06:14.853647 kubelet[3151]: I1216 13:06:14.853587 3151 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 16 13:06:14.853680 containerd[1714]: time="2025-12-16T13:06:14.853400642Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 16 13:06:15.869578 systemd[1]: Created slice kubepods-besteffort-pod075fcafd_7bc1_419a_be1b_742a7eb09c4d.slice - libcontainer container kubepods-besteffort-pod075fcafd_7bc1_419a_be1b_742a7eb09c4d.slice.
Dec 16 13:06:15.881215 systemd[1]: Created slice kubepods-burstable-pod8a139832_356c_4e8f_99a9_988f7b0dc6a7.slice - libcontainer container kubepods-burstable-pod8a139832_356c_4e8f_99a9_988f7b0dc6a7.slice.
Dec 16 13:06:15.938181 kubelet[3151]: I1216 13:06:15.938143 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cilium-run\") pod \"cilium-mqqkq\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " pod="kube-system/cilium-mqqkq"
Dec 16 13:06:15.938181 kubelet[3151]: I1216 13:06:15.938177 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cilium-config-path\") pod \"cilium-mqqkq\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " pod="kube-system/cilium-mqqkq"
Dec 16 13:06:15.938544 kubelet[3151]: I1216 13:06:15.938209 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cilium-cgroup\") pod \"cilium-mqqkq\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " pod="kube-system/cilium-mqqkq"
Dec 16 13:06:15.938544 kubelet[3151]: I1216 13:06:15.938225 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a139832-356c-4e8f-99a9-988f7b0dc6a7-clustermesh-secrets\") pod \"cilium-mqqkq\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " pod="kube-system/cilium-mqqkq"
Dec 16 13:06:15.938544 kubelet[3151]: I1216 13:06:15.938241 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gl7z\" (UniqueName: \"kubernetes.io/projected/075fcafd-7bc1-419a-be1b-742a7eb09c4d-kube-api-access-2gl7z\") pod \"kube-proxy-xx4b8\" (UID: \"075fcafd-7bc1-419a-be1b-742a7eb09c4d\") " pod="kube-system/kube-proxy-xx4b8"
Dec 16 13:06:15.938544 kubelet[3151]: I1216 13:06:15.938261 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-hostproc\") pod \"cilium-mqqkq\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " pod="kube-system/cilium-mqqkq"
Dec 16 13:06:15.938544 kubelet[3151]: I1216 13:06:15.938278 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-etc-cni-netd\") pod \"cilium-mqqkq\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " pod="kube-system/cilium-mqqkq"
Dec 16 13:06:15.938544 kubelet[3151]: I1216 13:06:15.938295 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-lib-modules\") pod \"cilium-mqqkq\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " pod="kube-system/cilium-mqqkq"
Dec 16 13:06:15.938675 kubelet[3151]: I1216 13:06:15.938310 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-host-proc-sys-kernel\") pod \"cilium-mqqkq\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " pod="kube-system/cilium-mqqkq"
Dec 16 13:06:15.938675 kubelet[3151]: I1216 13:06:15.938327 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jzw7\" (UniqueName: \"kubernetes.io/projected/8a139832-356c-4e8f-99a9-988f7b0dc6a7-kube-api-access-2jzw7\") pod \"cilium-mqqkq\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " pod="kube-system/cilium-mqqkq"
Dec 16 13:06:15.938675 kubelet[3151]: I1216 13:06:15.938347 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a139832-356c-4e8f-99a9-988f7b0dc6a7-hubble-tls\") pod \"cilium-mqqkq\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " pod="kube-system/cilium-mqqkq"
Dec 16 13:06:15.938675 kubelet[3151]: I1216 13:06:15.938366 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/075fcafd-7bc1-419a-be1b-742a7eb09c4d-xtables-lock\") pod \"kube-proxy-xx4b8\" (UID: \"075fcafd-7bc1-419a-be1b-742a7eb09c4d\") " pod="kube-system/kube-proxy-xx4b8"
Dec 16 13:06:15.938675 kubelet[3151]: I1216 13:06:15.938383 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/075fcafd-7bc1-419a-be1b-742a7eb09c4d-lib-modules\") pod \"kube-proxy-xx4b8\" (UID: \"075fcafd-7bc1-419a-be1b-742a7eb09c4d\") " pod="kube-system/kube-proxy-xx4b8"
Dec 16 13:06:15.938675 kubelet[3151]: I1216 13:06:15.938398 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cni-path\") pod \"cilium-mqqkq\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " pod="kube-system/cilium-mqqkq"
Dec 16 13:06:15.938773 kubelet[3151]: I1216 13:06:15.938414 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-host-proc-sys-net\") pod \"cilium-mqqkq\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " pod="kube-system/cilium-mqqkq"
Dec 16 13:06:15.938773 kubelet[3151]: I1216 13:06:15.938430 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/075fcafd-7bc1-419a-be1b-742a7eb09c4d-kube-proxy\") pod \"kube-proxy-xx4b8\" (UID: \"075fcafd-7bc1-419a-be1b-742a7eb09c4d\") " pod="kube-system/kube-proxy-xx4b8"
Dec 16 13:06:15.938773 kubelet[3151]: I1216 13:06:15.938445 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-bpf-maps\") pod \"cilium-mqqkq\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " pod="kube-system/cilium-mqqkq"
Dec 16 13:06:15.938773 kubelet[3151]: I1216 13:06:15.938470 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-xtables-lock\") pod \"cilium-mqqkq\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " pod="kube-system/cilium-mqqkq"
Dec 16 13:06:16.010874 systemd[1]: Created slice kubepods-besteffort-pod51605413_c1a6_459b_94a7_1f8fb1595ace.slice - libcontainer container kubepods-besteffort-pod51605413_c1a6_459b_94a7_1f8fb1595ace.slice.
Dec 16 13:06:16.039184 kubelet[3151]: I1216 13:06:16.039075 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51605413-c1a6-459b-94a7-1f8fb1595ace-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-g4chx\" (UID: \"51605413-c1a6-459b-94a7-1f8fb1595ace\") " pod="kube-system/cilium-operator-6c4d7847fc-g4chx"
Dec 16 13:06:16.044248 kubelet[3151]: I1216 13:06:16.044112 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xp9w\" (UniqueName: \"kubernetes.io/projected/51605413-c1a6-459b-94a7-1f8fb1595ace-kube-api-access-2xp9w\") pod \"cilium-operator-6c4d7847fc-g4chx\" (UID: \"51605413-c1a6-459b-94a7-1f8fb1595ace\") " pod="kube-system/cilium-operator-6c4d7847fc-g4chx"
Dec 16 13:06:16.178396 containerd[1714]: time="2025-12-16T13:06:16.178355314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xx4b8,Uid:075fcafd-7bc1-419a-be1b-742a7eb09c4d,Namespace:kube-system,Attempt:0,}"
Dec 16 13:06:16.184143 containerd[1714]: time="2025-12-16T13:06:16.184112739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mqqkq,Uid:8a139832-356c-4e8f-99a9-988f7b0dc6a7,Namespace:kube-system,Attempt:0,}"
Dec 16 13:06:16.222617 containerd[1714]: time="2025-12-16T13:06:16.222582578Z" level=info msg="connecting to shim 5a333fcfee486685af8c8de4476d7f48fccf6604c1cef2e2692652ecaae397f1" address="unix:///run/containerd/s/c69214a2eb9d4ffa926458696b0361093784ea76656a58d2f85976524fe2e11c" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:06:16.244755 containerd[1714]: time="2025-12-16T13:06:16.244691769Z" level=info msg="connecting to shim a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876" address="unix:///run/containerd/s/76c7d92d4240b3cef84ff36324ff40c4e12117a4cfbe0b4e96b1a99d8718b5ab" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:06:16.248443 systemd[1]: Started cri-containerd-5a333fcfee486685af8c8de4476d7f48fccf6604c1cef2e2692652ecaae397f1.scope - libcontainer container 5a333fcfee486685af8c8de4476d7f48fccf6604c1cef2e2692652ecaae397f1.
Dec 16 13:06:16.270511 systemd[1]: Started cri-containerd-a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876.scope - libcontainer container a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876.
Dec 16 13:06:16.282677 containerd[1714]: time="2025-12-16T13:06:16.282632582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xx4b8,Uid:075fcafd-7bc1-419a-be1b-742a7eb09c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a333fcfee486685af8c8de4476d7f48fccf6604c1cef2e2692652ecaae397f1\""
Dec 16 13:06:16.287126 containerd[1714]: time="2025-12-16T13:06:16.287102940Z" level=info msg="CreateContainer within sandbox \"5a333fcfee486685af8c8de4476d7f48fccf6604c1cef2e2692652ecaae397f1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 16 13:06:16.313084 containerd[1714]: time="2025-12-16T13:06:16.313057105Z" level=info msg="Container 00b3043386a600ed0869a666ef8dddb0c0462cf29ba8333ea9b05d8d4ad61e12: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:06:16.315773 containerd[1714]: time="2025-12-16T13:06:16.315724521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g4chx,Uid:51605413-c1a6-459b-94a7-1f8fb1595ace,Namespace:kube-system,Attempt:0,}"
Dec 16 13:06:16.318477 containerd[1714]: time="2025-12-16T13:06:16.318451975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mqqkq,Uid:8a139832-356c-4e8f-99a9-988f7b0dc6a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\""
Dec 16 13:06:16.319686 containerd[1714]: time="2025-12-16T13:06:16.319637572Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 16 13:06:16.341125 containerd[1714]: time="2025-12-16T13:06:16.341100553Z" level=info msg="CreateContainer within sandbox \"5a333fcfee486685af8c8de4476d7f48fccf6604c1cef2e2692652ecaae397f1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00b3043386a600ed0869a666ef8dddb0c0462cf29ba8333ea9b05d8d4ad61e12\""
Dec 16 13:06:16.341544 containerd[1714]: time="2025-12-16T13:06:16.341523964Z" level=info msg="StartContainer for \"00b3043386a600ed0869a666ef8dddb0c0462cf29ba8333ea9b05d8d4ad61e12\""
Dec 16 13:06:16.343790 containerd[1714]: time="2025-12-16T13:06:16.343757128Z" level=info msg="connecting to shim 00b3043386a600ed0869a666ef8dddb0c0462cf29ba8333ea9b05d8d4ad61e12" address="unix:///run/containerd/s/c69214a2eb9d4ffa926458696b0361093784ea76656a58d2f85976524fe2e11c" protocol=ttrpc version=3
Dec 16 13:06:16.364516 systemd[1]: Started cri-containerd-00b3043386a600ed0869a666ef8dddb0c0462cf29ba8333ea9b05d8d4ad61e12.scope - libcontainer container 00b3043386a600ed0869a666ef8dddb0c0462cf29ba8333ea9b05d8d4ad61e12.
Dec 16 13:06:16.373163 containerd[1714]: time="2025-12-16T13:06:16.372721991Z" level=info msg="connecting to shim b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705" address="unix:///run/containerd/s/350bbf98724e3c78fdc02b4894abf4f12c793223346db104dcae7ade912b6794" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:06:16.392306 systemd[1]: Started cri-containerd-b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705.scope - libcontainer container b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705.
Dec 16 13:06:16.425709 containerd[1714]: time="2025-12-16T13:06:16.425686002Z" level=info msg="StartContainer for \"00b3043386a600ed0869a666ef8dddb0c0462cf29ba8333ea9b05d8d4ad61e12\" returns successfully" Dec 16 13:06:16.456056 containerd[1714]: time="2025-12-16T13:06:16.455935681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g4chx,Uid:51605413-c1a6-459b-94a7-1f8fb1595ace,Namespace:kube-system,Attempt:0,} returns sandbox id \"b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705\"" Dec 16 13:06:17.073999 kubelet[3151]: I1216 13:06:17.073939 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xx4b8" podStartSLOduration=2.073922676 podStartE2EDuration="2.073922676s" podCreationTimestamp="2025-12-16 13:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:17.073746888 +0000 UTC m=+7.144926188" watchObservedRunningTime="2025-12-16 13:06:17.073922676 +0000 UTC m=+7.145101976" Dec 16 13:06:25.616799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount522980212.mount: Deactivated successfully. 
Dec 16 13:06:27.220911 containerd[1714]: time="2025-12-16T13:06:27.220860840Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:27.224161 containerd[1714]: time="2025-12-16T13:06:27.224120841Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Dec 16 13:06:27.228068 containerd[1714]: time="2025-12-16T13:06:27.228028701Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:27.229331 containerd[1714]: time="2025-12-16T13:06:27.229116525Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.90944609s" Dec 16 13:06:27.229331 containerd[1714]: time="2025-12-16T13:06:27.229150568Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 16 13:06:27.230251 containerd[1714]: time="2025-12-16T13:06:27.230224515Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 16 13:06:27.232037 containerd[1714]: time="2025-12-16T13:06:27.231539451Z" level=info msg="CreateContainer within sandbox \"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 13:06:27.261755 containerd[1714]: time="2025-12-16T13:06:27.261730690Z" level=info msg="Container b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:27.288041 containerd[1714]: time="2025-12-16T13:06:27.288004883Z" level=info msg="CreateContainer within sandbox \"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\"" Dec 16 13:06:27.288703 containerd[1714]: time="2025-12-16T13:06:27.288435560Z" level=info msg="StartContainer for \"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\"" Dec 16 13:06:27.289461 containerd[1714]: time="2025-12-16T13:06:27.289430072Z" level=info msg="connecting to shim b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b" address="unix:///run/containerd/s/76c7d92d4240b3cef84ff36324ff40c4e12117a4cfbe0b4e96b1a99d8718b5ab" protocol=ttrpc version=3 Dec 16 13:06:27.308347 systemd[1]: Started cri-containerd-b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b.scope - libcontainer container b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b. Dec 16 13:06:27.335577 containerd[1714]: time="2025-12-16T13:06:27.335488519Z" level=info msg="StartContainer for \"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\" returns successfully" Dec 16 13:06:27.343003 systemd[1]: cri-containerd-b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b.scope: Deactivated successfully. 
Dec 16 13:06:27.345897 containerd[1714]: time="2025-12-16T13:06:27.345852998Z" level=info msg="received container exit event container_id:\"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\" id:\"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\" pid:3566 exited_at:{seconds:1765890387 nanos:345507583}" Dec 16 13:06:27.361994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b-rootfs.mount: Deactivated successfully. Dec 16 13:06:37.346697 containerd[1714]: time="2025-12-16T13:06:37.346656392Z" level=error msg="failed to handle container TaskExit event container_id:\"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\" id:\"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\" pid:3566 exited_at:{seconds:1765890387 nanos:345507583}" error="failed to stop container: failed to delete task: context deadline exceeded" Dec 16 13:06:38.632383 containerd[1714]: time="2025-12-16T13:06:38.632321359Z" level=info msg="TaskExit event container_id:\"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\" id:\"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\" pid:3566 exited_at:{seconds:1765890387 nanos:345507583}" Dec 16 13:06:40.633029 containerd[1714]: time="2025-12-16T13:06:40.632972863Z" level=error msg="get state for b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b" error="context deadline exceeded" Dec 16 13:06:40.633029 containerd[1714]: time="2025-12-16T13:06:40.633003556Z" level=warning msg="unknown status" status=0 Dec 16 13:06:42.634298 containerd[1714]: time="2025-12-16T13:06:42.634180187Z" level=error msg="get state for b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b" error="context deadline exceeded" Dec 16 13:06:42.634298 containerd[1714]: time="2025-12-16T13:06:42.634234359Z" level=warning msg="unknown status" status=0 Dec 16 13:06:44.635078 containerd[1714]: 
time="2025-12-16T13:06:44.635020995Z" level=error msg="get state for b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b" error="context deadline exceeded" Dec 16 13:06:44.635078 containerd[1714]: time="2025-12-16T13:06:44.635067349Z" level=warning msg="unknown status" status=0 Dec 16 13:06:46.240430 containerd[1714]: time="2025-12-16T13:06:46.240240717Z" level=error msg="ttrpc: received message on inactive stream" stream=35 Dec 16 13:06:46.240430 containerd[1714]: time="2025-12-16T13:06:46.240308326Z" level=error msg="ttrpc: received message on inactive stream" stream=31 Dec 16 13:06:46.240430 containerd[1714]: time="2025-12-16T13:06:46.240318459Z" level=error msg="ttrpc: received message on inactive stream" stream=39 Dec 16 13:06:46.240430 containerd[1714]: time="2025-12-16T13:06:46.240326372Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Dec 16 13:06:46.241322 containerd[1714]: time="2025-12-16T13:06:46.241284119Z" level=info msg="Ensure that container b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b in task-service has been cleanup successfully" Dec 16 13:06:47.114782 containerd[1714]: time="2025-12-16T13:06:47.114736087Z" level=info msg="CreateContainer within sandbox \"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 13:06:47.449358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2210461551.mount: Deactivated successfully. 
Dec 16 13:06:47.932840 containerd[1714]: time="2025-12-16T13:06:47.932799972Z" level=info msg="Container f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:48.479205 containerd[1714]: time="2025-12-16T13:06:48.479163244Z" level=info msg="CreateContainer within sandbox \"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814\"" Dec 16 13:06:48.479852 containerd[1714]: time="2025-12-16T13:06:48.479823908Z" level=info msg="StartContainer for \"f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814\"" Dec 16 13:06:48.480820 containerd[1714]: time="2025-12-16T13:06:48.480774926Z" level=info msg="connecting to shim f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814" address="unix:///run/containerd/s/76c7d92d4240b3cef84ff36324ff40c4e12117a4cfbe0b4e96b1a99d8718b5ab" protocol=ttrpc version=3 Dec 16 13:06:48.502350 systemd[1]: Started cri-containerd-f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814.scope - libcontainer container f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814. Dec 16 13:06:48.547982 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:06:48.548636 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:06:48.548878 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:06:48.552243 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:06:48.554023 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 13:06:48.554963 systemd[1]: cri-containerd-f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814.scope: Deactivated successfully. Dec 16 13:06:48.568409 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 16 13:06:48.589468 containerd[1714]: time="2025-12-16T13:06:48.589435818Z" level=info msg="received container exit event container_id:\"f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814\" id:\"f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814\" pid:3619 exited_at:{seconds:1765890408 nanos:560467165}" Dec 16 13:06:48.596598 containerd[1714]: time="2025-12-16T13:06:48.596574466Z" level=info msg="StartContainer for \"f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814\" returns successfully" Dec 16 13:06:48.612084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814-rootfs.mount: Deactivated successfully. Dec 16 13:06:49.673929 containerd[1714]: time="2025-12-16T13:06:49.673891637Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:49.676684 containerd[1714]: time="2025-12-16T13:06:49.676655243Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Dec 16 13:06:49.680714 containerd[1714]: time="2025-12-16T13:06:49.680672489Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:49.681653 containerd[1714]: time="2025-12-16T13:06:49.681552086Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size 
\"18897442\" in 22.45129198s" Dec 16 13:06:49.681653 containerd[1714]: time="2025-12-16T13:06:49.681583037Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 16 13:06:49.684213 containerd[1714]: time="2025-12-16T13:06:49.683489917Z" level=info msg="CreateContainer within sandbox \"b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 16 13:06:49.702313 containerd[1714]: time="2025-12-16T13:06:49.702286622Z" level=info msg="Container 746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:49.726283 containerd[1714]: time="2025-12-16T13:06:49.726255557Z" level=info msg="CreateContainer within sandbox \"b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\"" Dec 16 13:06:49.727220 containerd[1714]: time="2025-12-16T13:06:49.726660574Z" level=info msg="StartContainer for \"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\"" Dec 16 13:06:49.728219 containerd[1714]: time="2025-12-16T13:06:49.728175312Z" level=info msg="connecting to shim 746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10" address="unix:///run/containerd/s/350bbf98724e3c78fdc02b4894abf4f12c793223346db104dcae7ade912b6794" protocol=ttrpc version=3 Dec 16 13:06:49.749356 systemd[1]: Started cri-containerd-746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10.scope - libcontainer container 746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10. 
Dec 16 13:06:49.775464 containerd[1714]: time="2025-12-16T13:06:49.775402546Z" level=info msg="StartContainer for \"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\" returns successfully" Dec 16 13:06:50.131022 containerd[1714]: time="2025-12-16T13:06:50.130986373Z" level=info msg="CreateContainer within sandbox \"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 13:06:50.161864 kubelet[3151]: I1216 13:06:50.161499 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-g4chx" podStartSLOduration=1.9368301570000002 podStartE2EDuration="35.161483248s" podCreationTimestamp="2025-12-16 13:06:15 +0000 UTC" firstStartedPulling="2025-12-16 13:06:16.457517757 +0000 UTC m=+6.528697046" lastFinishedPulling="2025-12-16 13:06:49.682170856 +0000 UTC m=+39.753350137" observedRunningTime="2025-12-16 13:06:50.160332222 +0000 UTC m=+40.231511521" watchObservedRunningTime="2025-12-16 13:06:50.161483248 +0000 UTC m=+40.232662544" Dec 16 13:06:50.168052 containerd[1714]: time="2025-12-16T13:06:50.166393218Z" level=info msg="Container 8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:50.167387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount535251431.mount: Deactivated successfully. 
Dec 16 13:06:50.196278 containerd[1714]: time="2025-12-16T13:06:50.196226867Z" level=info msg="CreateContainer within sandbox \"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589\"" Dec 16 13:06:50.199205 containerd[1714]: time="2025-12-16T13:06:50.198495300Z" level=info msg="StartContainer for \"8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589\"" Dec 16 13:06:50.201365 containerd[1714]: time="2025-12-16T13:06:50.201314787Z" level=info msg="connecting to shim 8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589" address="unix:///run/containerd/s/76c7d92d4240b3cef84ff36324ff40c4e12117a4cfbe0b4e96b1a99d8718b5ab" protocol=ttrpc version=3 Dec 16 13:06:50.231352 systemd[1]: Started cri-containerd-8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589.scope - libcontainer container 8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589. Dec 16 13:06:50.361027 containerd[1714]: time="2025-12-16T13:06:50.360983179Z" level=info msg="StartContainer for \"8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589\" returns successfully" Dec 16 13:06:50.376592 systemd[1]: cri-containerd-8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589.scope: Deactivated successfully. 
Dec 16 13:06:50.380162 containerd[1714]: time="2025-12-16T13:06:50.380114948Z" level=info msg="received container exit event container_id:\"8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589\" id:\"8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589\" pid:3714 exited_at:{seconds:1765890410 nanos:379712386}" Dec 16 13:06:51.137097 containerd[1714]: time="2025-12-16T13:06:51.137050681Z" level=info msg="CreateContainer within sandbox \"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 13:06:51.168774 containerd[1714]: time="2025-12-16T13:06:51.168725280Z" level=info msg="Container a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:51.184455 containerd[1714]: time="2025-12-16T13:06:51.184427220Z" level=info msg="CreateContainer within sandbox \"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917\"" Dec 16 13:06:51.184833 containerd[1714]: time="2025-12-16T13:06:51.184774151Z" level=info msg="StartContainer for \"a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917\"" Dec 16 13:06:51.185889 containerd[1714]: time="2025-12-16T13:06:51.185863359Z" level=info msg="connecting to shim a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917" address="unix:///run/containerd/s/76c7d92d4240b3cef84ff36324ff40c4e12117a4cfbe0b4e96b1a99d8718b5ab" protocol=ttrpc version=3 Dec 16 13:06:51.207320 systemd[1]: Started cri-containerd-a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917.scope - libcontainer container a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917. 
Dec 16 13:06:51.233036 systemd[1]: cri-containerd-a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917.scope: Deactivated successfully. Dec 16 13:06:51.236281 containerd[1714]: time="2025-12-16T13:06:51.236183682Z" level=info msg="received container exit event container_id:\"a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917\" id:\"a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917\" pid:3753 exited_at:{seconds:1765890411 nanos:232732595}" Dec 16 13:06:51.253280 containerd[1714]: time="2025-12-16T13:06:51.253181024Z" level=info msg="StartContainer for \"a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917\" returns successfully" Dec 16 13:06:51.266750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917-rootfs.mount: Deactivated successfully. Dec 16 13:06:52.149502 containerd[1714]: time="2025-12-16T13:06:52.149433149Z" level=info msg="CreateContainer within sandbox \"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 13:06:52.179928 containerd[1714]: time="2025-12-16T13:06:52.179892072Z" level=info msg="Container d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:52.183979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4130628822.mount: Deactivated successfully. 
Dec 16 13:06:52.195852 containerd[1714]: time="2025-12-16T13:06:52.195823286Z" level=info msg="CreateContainer within sandbox \"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\"" Dec 16 13:06:52.196468 containerd[1714]: time="2025-12-16T13:06:52.196443339Z" level=info msg="StartContainer for \"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\"" Dec 16 13:06:52.197506 containerd[1714]: time="2025-12-16T13:06:52.197481392Z" level=info msg="connecting to shim d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9" address="unix:///run/containerd/s/76c7d92d4240b3cef84ff36324ff40c4e12117a4cfbe0b4e96b1a99d8718b5ab" protocol=ttrpc version=3 Dec 16 13:06:52.219349 systemd[1]: Started cri-containerd-d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9.scope - libcontainer container d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9. 
Dec 16 13:06:52.257519 containerd[1714]: time="2025-12-16T13:06:52.257487438Z" level=info msg="StartContainer for \"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\" returns successfully" Dec 16 13:06:52.414429 kubelet[3151]: I1216 13:06:52.414333 3151 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 13:06:52.457156 kubelet[3151]: W1216 13:06:52.456662 3151 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4459.2.2-a-d193ee2782" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.2.2-a-d193ee2782' and this object Dec 16 13:06:52.457156 kubelet[3151]: I1216 13:06:52.456594 3151 status_manager.go:890] "Failed to get status for pod" podUID="b5c791f2-e56b-4246-a7ad-7bf632fe5966" pod="kube-system/coredns-668d6bf9bc-s4nw4" err="pods \"coredns-668d6bf9bc-s4nw4\" is forbidden: User \"system:node:ci-4459.2.2-a-d193ee2782\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.2-a-d193ee2782' and this object" Dec 16 13:06:52.457156 kubelet[3151]: E1216 13:06:52.456715 3151 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4459.2.2-a-d193ee2782\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.2-a-d193ee2782' and this object" logger="UnhandledError" Dec 16 13:06:52.463556 systemd[1]: Created slice kubepods-burstable-podb5c791f2_e56b_4246_a7ad_7bf632fe5966.slice - libcontainer container kubepods-burstable-podb5c791f2_e56b_4246_a7ad_7bf632fe5966.slice. 
Dec 16 13:06:52.478944 systemd[1]: Created slice kubepods-burstable-pod399ccc54_212d_4af6_87a6_9294a2c74302.slice - libcontainer container kubepods-burstable-pod399ccc54_212d_4af6_87a6_9294a2c74302.slice. Dec 16 13:06:52.549983 kubelet[3151]: I1216 13:06:52.549924 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5c791f2-e56b-4246-a7ad-7bf632fe5966-config-volume\") pod \"coredns-668d6bf9bc-s4nw4\" (UID: \"b5c791f2-e56b-4246-a7ad-7bf632fe5966\") " pod="kube-system/coredns-668d6bf9bc-s4nw4" Dec 16 13:06:52.549983 kubelet[3151]: I1216 13:06:52.549957 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/399ccc54-212d-4af6-87a6-9294a2c74302-config-volume\") pod \"coredns-668d6bf9bc-4h4wj\" (UID: \"399ccc54-212d-4af6-87a6-9294a2c74302\") " pod="kube-system/coredns-668d6bf9bc-4h4wj" Dec 16 13:06:52.549983 kubelet[3151]: I1216 13:06:52.549978 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd4bb\" (UniqueName: \"kubernetes.io/projected/b5c791f2-e56b-4246-a7ad-7bf632fe5966-kube-api-access-gd4bb\") pod \"coredns-668d6bf9bc-s4nw4\" (UID: \"b5c791f2-e56b-4246-a7ad-7bf632fe5966\") " pod="kube-system/coredns-668d6bf9bc-s4nw4" Dec 16 13:06:52.550140 kubelet[3151]: I1216 13:06:52.549998 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmsz7\" (UniqueName: \"kubernetes.io/projected/399ccc54-212d-4af6-87a6-9294a2c74302-kube-api-access-nmsz7\") pod \"coredns-668d6bf9bc-4h4wj\" (UID: \"399ccc54-212d-4af6-87a6-9294a2c74302\") " pod="kube-system/coredns-668d6bf9bc-4h4wj" Dec 16 13:06:53.670010 containerd[1714]: time="2025-12-16T13:06:53.669954921Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-s4nw4,Uid:b5c791f2-e56b-4246-a7ad-7bf632fe5966,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:53.683531 containerd[1714]: time="2025-12-16T13:06:53.683494410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4h4wj,Uid:399ccc54-212d-4af6-87a6-9294a2c74302,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:54.424110 systemd-networkd[1335]: cilium_host: Link UP Dec 16 13:06:54.424730 systemd-networkd[1335]: cilium_net: Link UP Dec 16 13:06:54.424873 systemd-networkd[1335]: cilium_host: Gained carrier Dec 16 13:06:54.424981 systemd-networkd[1335]: cilium_net: Gained carrier Dec 16 13:06:54.554636 systemd-networkd[1335]: cilium_vxlan: Link UP Dec 16 13:06:54.554648 systemd-networkd[1335]: cilium_vxlan: Gained carrier Dec 16 13:06:54.725305 systemd-networkd[1335]: cilium_net: Gained IPv6LL Dec 16 13:06:54.732270 systemd-networkd[1335]: cilium_host: Gained IPv6LL Dec 16 13:06:54.780217 kernel: NET: Registered PF_ALG protocol family Dec 16 13:06:55.313575 systemd-networkd[1335]: lxc_health: Link UP Dec 16 13:06:55.319632 systemd-networkd[1335]: lxc_health: Gained carrier Dec 16 13:06:55.713241 kernel: eth0: renamed from tmp7462d Dec 16 13:06:55.716489 systemd-networkd[1335]: lxc816498cb850f: Link UP Dec 16 13:06:55.716738 systemd-networkd[1335]: lxc816498cb850f: Gained carrier Dec 16 13:06:55.733492 systemd-networkd[1335]: lxc5b554e018457: Link UP Dec 16 13:06:55.737221 kernel: eth0: renamed from tmp2b394 Dec 16 13:06:55.741322 systemd-networkd[1335]: lxc5b554e018457: Gained carrier Dec 16 13:06:56.012371 systemd-networkd[1335]: cilium_vxlan: Gained IPv6LL Dec 16 13:06:56.209133 kubelet[3151]: I1216 13:06:56.209076 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mqqkq" podStartSLOduration=30.298321953 podStartE2EDuration="41.209058333s" podCreationTimestamp="2025-12-16 13:06:15 +0000 UTC" firstStartedPulling="2025-12-16 13:06:16.319263549 +0000 UTC 
m=+6.390442841" lastFinishedPulling="2025-12-16 13:06:27.229999921 +0000 UTC m=+17.301179221" observedRunningTime="2025-12-16 13:06:53.177046331 +0000 UTC m=+43.248225626" watchObservedRunningTime="2025-12-16 13:06:56.209058333 +0000 UTC m=+46.280237628" Dec 16 13:06:56.908495 systemd-networkd[1335]: lxc_health: Gained IPv6LL Dec 16 13:06:57.100464 systemd-networkd[1335]: lxc5b554e018457: Gained IPv6LL Dec 16 13:06:57.612456 systemd-networkd[1335]: lxc816498cb850f: Gained IPv6LL Dec 16 13:06:58.709017 containerd[1714]: time="2025-12-16T13:06:58.708924981Z" level=info msg="connecting to shim 2b394fa113b00f95cfe5908918714ff38d42323d1a442e5aa38f2853751fd62c" address="unix:///run/containerd/s/fe3803809ad39fb4276aa9dc2b84e580b1b0a19b9700884e253322bece626a3d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:58.749880 systemd[1]: Started cri-containerd-2b394fa113b00f95cfe5908918714ff38d42323d1a442e5aa38f2853751fd62c.scope - libcontainer container 2b394fa113b00f95cfe5908918714ff38d42323d1a442e5aa38f2853751fd62c. Dec 16 13:06:58.754412 containerd[1714]: time="2025-12-16T13:06:58.754372691Z" level=info msg="connecting to shim 7462d5bc39be20fac9fb2d633612d96f2b7563179d454a7d74f94500289579ed" address="unix:///run/containerd/s/f8bdfe24986a57e9c0c9b25aca38e36adf6c1dba5220684cf568bdef1db35d41" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:58.790313 systemd[1]: Started cri-containerd-7462d5bc39be20fac9fb2d633612d96f2b7563179d454a7d74f94500289579ed.scope - libcontainer container 7462d5bc39be20fac9fb2d633612d96f2b7563179d454a7d74f94500289579ed. 
Dec 16 13:06:58.817996 containerd[1714]: time="2025-12-16T13:06:58.817966558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4h4wj,Uid:399ccc54-212d-4af6-87a6-9294a2c74302,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b394fa113b00f95cfe5908918714ff38d42323d1a442e5aa38f2853751fd62c\"" Dec 16 13:06:58.822771 containerd[1714]: time="2025-12-16T13:06:58.822369630Z" level=info msg="CreateContainer within sandbox \"2b394fa113b00f95cfe5908918714ff38d42323d1a442e5aa38f2853751fd62c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:06:58.846692 containerd[1714]: time="2025-12-16T13:06:58.846669261Z" level=info msg="Container 98afc7adc4cf592a2f1c5ccee96e963a73d23fa648eb722ac74c7770d3dad8e3: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:58.850135 containerd[1714]: time="2025-12-16T13:06:58.850104984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s4nw4,Uid:b5c791f2-e56b-4246-a7ad-7bf632fe5966,Namespace:kube-system,Attempt:0,} returns sandbox id \"7462d5bc39be20fac9fb2d633612d96f2b7563179d454a7d74f94500289579ed\"" Dec 16 13:06:58.852446 containerd[1714]: time="2025-12-16T13:06:58.852348047Z" level=info msg="CreateContainer within sandbox \"7462d5bc39be20fac9fb2d633612d96f2b7563179d454a7d74f94500289579ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:06:58.862671 containerd[1714]: time="2025-12-16T13:06:58.862651580Z" level=info msg="CreateContainer within sandbox \"2b394fa113b00f95cfe5908918714ff38d42323d1a442e5aa38f2853751fd62c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"98afc7adc4cf592a2f1c5ccee96e963a73d23fa648eb722ac74c7770d3dad8e3\"" Dec 16 13:06:58.863493 containerd[1714]: time="2025-12-16T13:06:58.863335330Z" level=info msg="StartContainer for \"98afc7adc4cf592a2f1c5ccee96e963a73d23fa648eb722ac74c7770d3dad8e3\"" Dec 16 13:06:58.865903 containerd[1714]: time="2025-12-16T13:06:58.865849495Z" level=info 
msg="connecting to shim 98afc7adc4cf592a2f1c5ccee96e963a73d23fa648eb722ac74c7770d3dad8e3" address="unix:///run/containerd/s/fe3803809ad39fb4276aa9dc2b84e580b1b0a19b9700884e253322bece626a3d" protocol=ttrpc version=3 Dec 16 13:06:58.881548 containerd[1714]: time="2025-12-16T13:06:58.881525111Z" level=info msg="Container 335afcc1624db6878e2092d64c1acec1c076f1ac2292a22907309b81cb9c3d28: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:58.883327 systemd[1]: Started cri-containerd-98afc7adc4cf592a2f1c5ccee96e963a73d23fa648eb722ac74c7770d3dad8e3.scope - libcontainer container 98afc7adc4cf592a2f1c5ccee96e963a73d23fa648eb722ac74c7770d3dad8e3. Dec 16 13:06:58.897826 containerd[1714]: time="2025-12-16T13:06:58.897793311Z" level=info msg="CreateContainer within sandbox \"7462d5bc39be20fac9fb2d633612d96f2b7563179d454a7d74f94500289579ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"335afcc1624db6878e2092d64c1acec1c076f1ac2292a22907309b81cb9c3d28\"" Dec 16 13:06:58.899243 containerd[1714]: time="2025-12-16T13:06:58.898928055Z" level=info msg="StartContainer for \"335afcc1624db6878e2092d64c1acec1c076f1ac2292a22907309b81cb9c3d28\"" Dec 16 13:06:58.904439 containerd[1714]: time="2025-12-16T13:06:58.904405035Z" level=info msg="connecting to shim 335afcc1624db6878e2092d64c1acec1c076f1ac2292a22907309b81cb9c3d28" address="unix:///run/containerd/s/f8bdfe24986a57e9c0c9b25aca38e36adf6c1dba5220684cf568bdef1db35d41" protocol=ttrpc version=3 Dec 16 13:06:58.921760 containerd[1714]: time="2025-12-16T13:06:58.921210408Z" level=info msg="StartContainer for \"98afc7adc4cf592a2f1c5ccee96e963a73d23fa648eb722ac74c7770d3dad8e3\" returns successfully" Dec 16 13:06:58.928508 systemd[1]: Started cri-containerd-335afcc1624db6878e2092d64c1acec1c076f1ac2292a22907309b81cb9c3d28.scope - libcontainer container 335afcc1624db6878e2092d64c1acec1c076f1ac2292a22907309b81cb9c3d28. 
Dec 16 13:06:58.966573 containerd[1714]: time="2025-12-16T13:06:58.966048583Z" level=info msg="StartContainer for \"335afcc1624db6878e2092d64c1acec1c076f1ac2292a22907309b81cb9c3d28\" returns successfully" Dec 16 13:06:59.174852 kubelet[3151]: I1216 13:06:59.174446 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s4nw4" podStartSLOduration=44.174427808 podStartE2EDuration="44.174427808s" podCreationTimestamp="2025-12-16 13:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:59.174247268 +0000 UTC m=+49.245426562" watchObservedRunningTime="2025-12-16 13:06:59.174427808 +0000 UTC m=+49.245607104" Dec 16 13:07:43.822034 systemd[1]: Started sshd@7-10.200.0.34:22-10.200.16.10:39512.service - OpenSSH per-connection server daemon (10.200.16.10:39512). Dec 16 13:07:44.373087 sshd[4467]: Accepted publickey for core from 10.200.16.10 port 39512 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:07:44.373524 sshd-session[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:44.377656 systemd-logind[1688]: New session 10 of user core. Dec 16 13:07:44.385349 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 13:07:44.821062 sshd[4470]: Connection closed by 10.200.16.10 port 39512 Dec 16 13:07:44.821648 sshd-session[4467]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:44.824895 systemd[1]: sshd@7-10.200.0.34:22-10.200.16.10:39512.service: Deactivated successfully. Dec 16 13:07:44.826478 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:07:44.827479 systemd-logind[1688]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:07:44.828760 systemd-logind[1688]: Removed session 10. 
Dec 16 13:07:49.934130 systemd[1]: Started sshd@8-10.200.0.34:22-10.200.16.10:39526.service - OpenSSH per-connection server daemon (10.200.16.10:39526). Dec 16 13:07:50.481781 sshd[4485]: Accepted publickey for core from 10.200.16.10 port 39526 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:07:50.482951 sshd-session[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:50.486964 systemd-logind[1688]: New session 11 of user core. Dec 16 13:07:50.493325 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 13:07:50.921440 sshd[4488]: Connection closed by 10.200.16.10 port 39526 Dec 16 13:07:50.921965 sshd-session[4485]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:50.925289 systemd[1]: sshd@8-10.200.0.34:22-10.200.16.10:39526.service: Deactivated successfully. Dec 16 13:07:50.927033 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:07:50.927799 systemd-logind[1688]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:07:50.930014 systemd-logind[1688]: Removed session 11. Dec 16 13:07:56.029388 systemd[1]: Started sshd@9-10.200.0.34:22-10.200.16.10:47476.service - OpenSSH per-connection server daemon (10.200.16.10:47476). Dec 16 13:07:56.582159 sshd[4501]: Accepted publickey for core from 10.200.16.10 port 47476 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:07:56.583284 sshd-session[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:56.587429 systemd-logind[1688]: New session 12 of user core. Dec 16 13:07:56.592321 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 13:07:57.019994 sshd[4504]: Connection closed by 10.200.16.10 port 47476 Dec 16 13:07:57.020551 sshd-session[4501]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:57.023894 systemd[1]: sshd@9-10.200.0.34:22-10.200.16.10:47476.service: Deactivated successfully. 
Dec 16 13:07:57.025486 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:07:57.026347 systemd-logind[1688]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:07:57.027719 systemd-logind[1688]: Removed session 12. Dec 16 13:08:02.117397 systemd[1]: Started sshd@10-10.200.0.34:22-10.200.16.10:56728.service - OpenSSH per-connection server daemon (10.200.16.10:56728). Dec 16 13:08:02.665732 sshd[4517]: Accepted publickey for core from 10.200.16.10 port 56728 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:08:02.667132 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:02.672352 systemd-logind[1688]: New session 13 of user core. Dec 16 13:08:02.676351 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 13:08:03.104911 sshd[4520]: Connection closed by 10.200.16.10 port 56728 Dec 16 13:08:03.105601 sshd-session[4517]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:03.108867 systemd[1]: sshd@10-10.200.0.34:22-10.200.16.10:56728.service: Deactivated successfully. Dec 16 13:08:03.110876 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:08:03.112231 systemd-logind[1688]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:08:03.113905 systemd-logind[1688]: Removed session 13. Dec 16 13:08:03.208015 systemd[1]: Started sshd@11-10.200.0.34:22-10.200.16.10:56740.service - OpenSSH per-connection server daemon (10.200.16.10:56740). Dec 16 13:08:03.756247 sshd[4533]: Accepted publickey for core from 10.200.16.10 port 56740 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:08:03.757237 sshd-session[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:03.761923 systemd-logind[1688]: New session 14 of user core. Dec 16 13:08:03.766349 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 13:08:04.227749 sshd[4537]: Connection closed by 10.200.16.10 port 56740 Dec 16 13:08:04.228390 sshd-session[4533]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:04.231805 systemd[1]: sshd@11-10.200.0.34:22-10.200.16.10:56740.service: Deactivated successfully. Dec 16 13:08:04.233759 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:08:04.234719 systemd-logind[1688]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:08:04.236052 systemd-logind[1688]: Removed session 14. Dec 16 13:08:04.340088 systemd[1]: Started sshd@12-10.200.0.34:22-10.200.16.10:56742.service - OpenSSH per-connection server daemon (10.200.16.10:56742). Dec 16 13:08:04.886375 sshd[4547]: Accepted publickey for core from 10.200.16.10 port 56742 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:08:04.887459 sshd-session[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:04.891244 systemd-logind[1688]: New session 15 of user core. Dec 16 13:08:04.902315 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 13:08:05.323546 sshd[4550]: Connection closed by 10.200.16.10 port 56742 Dec 16 13:08:05.324294 sshd-session[4547]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:05.327284 systemd[1]: sshd@12-10.200.0.34:22-10.200.16.10:56742.service: Deactivated successfully. Dec 16 13:08:05.328889 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 13:08:05.329785 systemd-logind[1688]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:08:05.330894 systemd-logind[1688]: Removed session 15. Dec 16 13:08:10.421256 systemd[1]: Started sshd@13-10.200.0.34:22-10.200.16.10:53680.service - OpenSSH per-connection server daemon (10.200.16.10:53680). 
Dec 16 13:08:10.969140 sshd[4565]: Accepted publickey for core from 10.200.16.10 port 53680 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:08:10.970248 sshd-session[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:10.974271 systemd-logind[1688]: New session 16 of user core. Dec 16 13:08:10.982362 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 13:08:11.405404 sshd[4568]: Connection closed by 10.200.16.10 port 53680 Dec 16 13:08:11.405906 sshd-session[4565]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:11.409184 systemd[1]: sshd@13-10.200.0.34:22-10.200.16.10:53680.service: Deactivated successfully. Dec 16 13:08:11.410812 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:08:11.411985 systemd-logind[1688]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:08:11.413707 systemd-logind[1688]: Removed session 16. Dec 16 13:08:11.512182 systemd[1]: Started sshd@14-10.200.0.34:22-10.200.16.10:53686.service - OpenSSH per-connection server daemon (10.200.16.10:53686). Dec 16 13:08:12.061909 sshd[4580]: Accepted publickey for core from 10.200.16.10 port 53686 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:08:12.062983 sshd-session[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:12.067133 systemd-logind[1688]: New session 17 of user core. Dec 16 13:08:12.071320 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 13:08:12.536327 sshd[4583]: Connection closed by 10.200.16.10 port 53686 Dec 16 13:08:12.536882 sshd-session[4580]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:12.540343 systemd[1]: sshd@14-10.200.0.34:22-10.200.16.10:53686.service: Deactivated successfully. Dec 16 13:08:12.542364 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:08:12.543126 systemd-logind[1688]: Session 17 logged out. 
Waiting for processes to exit. Dec 16 13:08:12.544221 systemd-logind[1688]: Removed session 17. Dec 16 13:08:12.638208 systemd[1]: Started sshd@15-10.200.0.34:22-10.200.16.10:53698.service - OpenSSH per-connection server daemon (10.200.16.10:53698). Dec 16 13:08:13.187639 sshd[4593]: Accepted publickey for core from 10.200.16.10 port 53698 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:08:13.191362 sshd-session[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:13.195242 systemd-logind[1688]: New session 18 of user core. Dec 16 13:08:13.202334 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 13:08:13.920200 sshd[4596]: Connection closed by 10.200.16.10 port 53698 Dec 16 13:08:13.920791 sshd-session[4593]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:13.924615 systemd[1]: sshd@15-10.200.0.34:22-10.200.16.10:53698.service: Deactivated successfully. Dec 16 13:08:13.926450 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:08:13.927183 systemd-logind[1688]: Session 18 logged out. Waiting for processes to exit. Dec 16 13:08:13.928410 systemd-logind[1688]: Removed session 18. Dec 16 13:08:14.016417 systemd[1]: Started sshd@16-10.200.0.34:22-10.200.16.10:53702.service - OpenSSH per-connection server daemon (10.200.16.10:53702). Dec 16 13:08:14.561637 sshd[4613]: Accepted publickey for core from 10.200.16.10 port 53702 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:08:14.562722 sshd-session[4613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:14.567280 systemd-logind[1688]: New session 19 of user core. Dec 16 13:08:14.572337 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 13:08:15.078497 sshd[4616]: Connection closed by 10.200.16.10 port 53702 Dec 16 13:08:15.079435 sshd-session[4613]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:15.082366 systemd[1]: sshd@16-10.200.0.34:22-10.200.16.10:53702.service: Deactivated successfully. Dec 16 13:08:15.084167 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:08:15.086110 systemd-logind[1688]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:08:15.087283 systemd-logind[1688]: Removed session 19. Dec 16 13:08:15.181568 systemd[1]: Started sshd@17-10.200.0.34:22-10.200.16.10:53716.service - OpenSSH per-connection server daemon (10.200.16.10:53716). Dec 16 13:08:15.728645 sshd[4626]: Accepted publickey for core from 10.200.16.10 port 53716 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:08:15.730013 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:15.734634 systemd-logind[1688]: New session 20 of user core. Dec 16 13:08:15.740323 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 13:08:16.164384 sshd[4629]: Connection closed by 10.200.16.10 port 53716 Dec 16 13:08:16.164952 sshd-session[4626]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:16.168361 systemd[1]: sshd@17-10.200.0.34:22-10.200.16.10:53716.service: Deactivated successfully. Dec 16 13:08:16.170090 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 13:08:16.170848 systemd-logind[1688]: Session 20 logged out. Waiting for processes to exit. Dec 16 13:08:16.171999 systemd-logind[1688]: Removed session 20. 
Dec 16 13:08:20.202379 waagent[1902]: 2025-12-16T13:08:20.202334Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Dec 16 13:08:20.211080 waagent[1902]: 2025-12-16T13:08:20.211028Z INFO ExtHandler Dec 16 13:08:20.211209 waagent[1902]: 2025-12-16T13:08:20.211094Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Dec 16 13:08:20.263045 waagent[1902]: 2025-12-16T13:08:20.263005Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 13:08:20.339772 waagent[1902]: 2025-12-16T13:08:20.339711Z INFO ExtHandler Downloaded certificate {'thumbprint': '342A91527E17CF0AFDA707C616B2E7D57D88ABCD', 'hasPrivateKey': True} Dec 16 13:08:20.340225 waagent[1902]: 2025-12-16T13:08:20.340154Z INFO ExtHandler Fetch goal state completed Dec 16 13:08:20.340473 waagent[1902]: 2025-12-16T13:08:20.340447Z INFO ExtHandler ExtHandler Dec 16 13:08:20.340515 waagent[1902]: 2025-12-16T13:08:20.340495Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 8d5ff2d0-4f89-4c67-9e80-32e5a1858e31 correlation 2a66e3e6-89d5-4b9d-a5bb-535ffce17707 created: 2025-12-16T13:08:17.148323Z] Dec 16 13:08:20.340718 waagent[1902]: 2025-12-16T13:08:20.340696Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 16 13:08:20.341093 waagent[1902]: 2025-12-16T13:08:20.341069Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Dec 16 13:08:21.266144 systemd[1]: Started sshd@18-10.200.0.34:22-10.200.16.10:51198.service - OpenSSH per-connection server daemon (10.200.16.10:51198). 
Dec 16 13:08:21.815700 sshd[4650]: Accepted publickey for core from 10.200.16.10 port 51198 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:08:21.816927 sshd-session[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:21.821120 systemd-logind[1688]: New session 21 of user core. Dec 16 13:08:21.827315 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 13:08:22.251333 sshd[4653]: Connection closed by 10.200.16.10 port 51198 Dec 16 13:08:22.251903 sshd-session[4650]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:22.254702 systemd[1]: sshd@18-10.200.0.34:22-10.200.16.10:51198.service: Deactivated successfully. Dec 16 13:08:22.256695 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 13:08:22.258133 systemd-logind[1688]: Session 21 logged out. Waiting for processes to exit. Dec 16 13:08:22.259618 systemd-logind[1688]: Removed session 21. Dec 16 13:08:26.374269 waagent[1902]: 2025-12-16T13:08:26.374225Z INFO ExtHandler Dec 16 13:08:26.374636 waagent[1902]: 2025-12-16T13:08:26.374338Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3f591716-7957-47c8-8ef0-523951e3a2c9 eTag: 510341662219764624 source: Fabric] Dec 16 13:08:26.374668 waagent[1902]: 2025-12-16T13:08:26.374651Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 16 13:08:27.351392 systemd[1]: Started sshd@19-10.200.0.34:22-10.200.16.10:51212.service - OpenSSH per-connection server daemon (10.200.16.10:51212). Dec 16 13:08:27.907540 sshd[4665]: Accepted publickey for core from 10.200.16.10 port 51212 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:08:27.908648 sshd-session[4665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:27.912976 systemd-logind[1688]: New session 22 of user core. Dec 16 13:08:27.921312 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 13:08:28.343314 sshd[4668]: Connection closed by 10.200.16.10 port 51212 Dec 16 13:08:28.343994 sshd-session[4665]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:28.348327 systemd[1]: sshd@19-10.200.0.34:22-10.200.16.10:51212.service: Deactivated successfully. Dec 16 13:08:28.349906 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 13:08:28.350596 systemd-logind[1688]: Session 22 logged out. Waiting for processes to exit. Dec 16 13:08:28.352071 systemd-logind[1688]: Removed session 22. Dec 16 13:08:33.441346 systemd[1]: Started sshd@20-10.200.0.34:22-10.200.16.10:49826.service - OpenSSH per-connection server daemon (10.200.16.10:49826). Dec 16 13:08:33.989950 sshd[4679]: Accepted publickey for core from 10.200.16.10 port 49826 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:08:33.991202 sshd-session[4679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:33.995273 systemd-logind[1688]: New session 23 of user core. Dec 16 13:08:33.999325 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 13:08:34.426551 sshd[4682]: Connection closed by 10.200.16.10 port 49826 Dec 16 13:08:34.427086 sshd-session[4679]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:34.430289 systemd[1]: sshd@20-10.200.0.34:22-10.200.16.10:49826.service: Deactivated successfully. Dec 16 13:08:34.431863 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:08:34.432843 systemd-logind[1688]: Session 23 logged out. Waiting for processes to exit. Dec 16 13:08:34.433988 systemd-logind[1688]: Removed session 23. Dec 16 13:08:34.524151 systemd[1]: Started sshd@21-10.200.0.34:22-10.200.16.10:49842.service - OpenSSH per-connection server daemon (10.200.16.10:49842). 
Dec 16 13:08:35.071392 sshd[4694]: Accepted publickey for core from 10.200.16.10 port 49842 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:08:35.072510 sshd-session[4694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:35.076251 systemd-logind[1688]: New session 24 of user core. Dec 16 13:08:35.083331 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 13:08:36.670754 kubelet[3151]: I1216 13:08:36.670687 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4h4wj" podStartSLOduration=141.670668251 podStartE2EDuration="2m21.670668251s" podCreationTimestamp="2025-12-16 13:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:59.211355207 +0000 UTC m=+49.282534505" watchObservedRunningTime="2025-12-16 13:08:36.670668251 +0000 UTC m=+146.741847557" Dec 16 13:08:36.684965 containerd[1714]: time="2025-12-16T13:08:36.684567170Z" level=info msg="StopContainer for \"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\" with timeout 30 (s)" Dec 16 13:08:36.686546 containerd[1714]: time="2025-12-16T13:08:36.686440023Z" level=info msg="Stop container \"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\" with signal terminated" Dec 16 13:08:36.698966 systemd[1]: cri-containerd-746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10.scope: Deactivated successfully. 
Dec 16 13:08:36.705593 containerd[1714]: time="2025-12-16T13:08:36.705566191Z" level=info msg="received container exit event container_id:\"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\" id:\"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\" pid:3678 exited_at:{seconds:1765890516 nanos:702156100}" Dec 16 13:08:36.706325 containerd[1714]: time="2025-12-16T13:08:36.706292239Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:08:36.714304 containerd[1714]: time="2025-12-16T13:08:36.714176879Z" level=info msg="StopContainer for \"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\" with timeout 2 (s)" Dec 16 13:08:36.714776 containerd[1714]: time="2025-12-16T13:08:36.714755628Z" level=info msg="Stop container \"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\" with signal terminated" Dec 16 13:08:36.723593 systemd-networkd[1335]: lxc_health: Link DOWN Dec 16 13:08:36.724502 systemd-networkd[1335]: lxc_health: Lost carrier Dec 16 13:08:36.737883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10-rootfs.mount: Deactivated successfully. Dec 16 13:08:36.739689 systemd[1]: cri-containerd-d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9.scope: Deactivated successfully. Dec 16 13:08:36.739955 systemd[1]: cri-containerd-d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9.scope: Consumed 5.251s CPU time, 124.5M memory peak, 128K read from disk, 13.3M written to disk. 
Dec 16 13:08:36.742370 containerd[1714]: time="2025-12-16T13:08:36.742328063Z" level=info msg="received container exit event container_id:\"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\" id:\"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\" pid:3791 exited_at:{seconds:1765890516 nanos:742056407}" Dec 16 13:08:36.757916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9-rootfs.mount: Deactivated successfully. Dec 16 13:08:36.852575 containerd[1714]: time="2025-12-16T13:08:36.852547811Z" level=info msg="StopContainer for \"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\" returns successfully" Dec 16 13:08:36.853170 containerd[1714]: time="2025-12-16T13:08:36.853145174Z" level=info msg="StopPodSandbox for \"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\"" Dec 16 13:08:36.853316 containerd[1714]: time="2025-12-16T13:08:36.853281244Z" level=info msg="Container to stop \"a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:08:36.853316 containerd[1714]: time="2025-12-16T13:08:36.853295028Z" level=info msg="Container to stop \"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:08:36.853316 containerd[1714]: time="2025-12-16T13:08:36.853304847Z" level=info msg="Container to stop \"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:08:36.853316 containerd[1714]: time="2025-12-16T13:08:36.853314079Z" level=info msg="Container to stop \"f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:08:36.853436 containerd[1714]: 
time="2025-12-16T13:08:36.853321989Z" level=info msg="Container to stop \"8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:08:36.857326 containerd[1714]: time="2025-12-16T13:08:36.857302411Z" level=info msg="StopContainer for \"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\" returns successfully" Dec 16 13:08:36.857661 containerd[1714]: time="2025-12-16T13:08:36.857642419Z" level=info msg="StopPodSandbox for \"b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705\"" Dec 16 13:08:36.857724 containerd[1714]: time="2025-12-16T13:08:36.857694143Z" level=info msg="Container to stop \"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:08:36.861041 systemd[1]: cri-containerd-a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876.scope: Deactivated successfully. Dec 16 13:08:36.866417 systemd[1]: cri-containerd-b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705.scope: Deactivated successfully. 
Dec 16 13:08:36.867754 containerd[1714]: time="2025-12-16T13:08:36.867732270Z" level=info msg="received sandbox exit event container_id:\"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\" id:\"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\" exit_status:137 exited_at:{seconds:1765890516 nanos:866998147}" monitor_name=podsandbox Dec 16 13:08:36.868881 containerd[1714]: time="2025-12-16T13:08:36.868807506Z" level=info msg="received sandbox exit event container_id:\"b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705\" id:\"b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705\" exit_status:137 exited_at:{seconds:1765890516 nanos:868661588}" monitor_name=podsandbox Dec 16 13:08:36.892983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705-rootfs.mount: Deactivated successfully. Dec 16 13:08:36.893099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876-rootfs.mount: Deactivated successfully. 
Dec 16 13:08:36.911908 containerd[1714]: time="2025-12-16T13:08:36.911462388Z" level=info msg="shim disconnected" id=b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705 namespace=k8s.io Dec 16 13:08:36.911908 containerd[1714]: time="2025-12-16T13:08:36.911515611Z" level=warning msg="cleaning up after shim disconnected" id=b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705 namespace=k8s.io Dec 16 13:08:36.911908 containerd[1714]: time="2025-12-16T13:08:36.911523696Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 13:08:36.912480 containerd[1714]: time="2025-12-16T13:08:36.912460688Z" level=info msg="shim disconnected" id=a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876 namespace=k8s.io Dec 16 13:08:36.912567 containerd[1714]: time="2025-12-16T13:08:36.912556454Z" level=warning msg="cleaning up after shim disconnected" id=a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876 namespace=k8s.io Dec 16 13:08:36.912635 containerd[1714]: time="2025-12-16T13:08:36.912604155Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 13:08:36.924129 containerd[1714]: time="2025-12-16T13:08:36.924056970Z" level=info msg="received sandbox container exit event sandbox_id:\"b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705\" exit_status:137 exited_at:{seconds:1765890516 nanos:868661588}" monitor_name=criService Dec 16 13:08:36.925251 containerd[1714]: time="2025-12-16T13:08:36.925170840Z" level=info msg="received sandbox container exit event sandbox_id:\"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\" exit_status:137 exited_at:{seconds:1765890516 nanos:866998147}" monitor_name=criService Dec 16 13:08:36.925442 containerd[1714]: time="2025-12-16T13:08:36.925419835Z" level=info msg="TearDown network for sandbox \"b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705\" successfully" Dec 16 13:08:36.925502 containerd[1714]: time="2025-12-16T13:08:36.925490163Z" 
level=info msg="StopPodSandbox for \"b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705\" returns successfully" Dec 16 13:08:36.927578 containerd[1714]: time="2025-12-16T13:08:36.927474711Z" level=info msg="TearDown network for sandbox \"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\" successfully" Dec 16 13:08:36.927578 containerd[1714]: time="2025-12-16T13:08:36.927504312Z" level=info msg="StopPodSandbox for \"a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876\" returns successfully" Dec 16 13:08:36.928154 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b834ceb53daaafbe59353db1c2c32fcefae715af8db70f5c3fef56400c1ab705-shm.mount: Deactivated successfully. Dec 16 13:08:36.993102 kubelet[3151]: I1216 13:08:36.993071 3151 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51605413-c1a6-459b-94a7-1f8fb1595ace-cilium-config-path\") pod \"51605413-c1a6-459b-94a7-1f8fb1595ace\" (UID: \"51605413-c1a6-459b-94a7-1f8fb1595ace\") " Dec 16 13:08:36.993102 kubelet[3151]: I1216 13:08:36.993104 3151 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xp9w\" (UniqueName: \"kubernetes.io/projected/51605413-c1a6-459b-94a7-1f8fb1595ace-kube-api-access-2xp9w\") pod \"51605413-c1a6-459b-94a7-1f8fb1595ace\" (UID: \"51605413-c1a6-459b-94a7-1f8fb1595ace\") " Dec 16 13:08:36.993249 kubelet[3151]: I1216 13:08:36.993123 3151 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-xtables-lock\") pod \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " Dec 16 13:08:36.993249 kubelet[3151]: I1216 13:08:36.993142 3151 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jzw7\" (UniqueName: 
\"kubernetes.io/projected/8a139832-356c-4e8f-99a9-988f7b0dc6a7-kube-api-access-2jzw7\") pod \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " Dec 16 13:08:36.993249 kubelet[3151]: I1216 13:08:36.993162 3151 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cilium-config-path\") pod \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " Dec 16 13:08:36.993249 kubelet[3151]: I1216 13:08:36.993181 3151 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a139832-356c-4e8f-99a9-988f7b0dc6a7-clustermesh-secrets\") pod \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " Dec 16 13:08:36.993249 kubelet[3151]: I1216 13:08:36.993208 3151 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-bpf-maps\") pod \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " Dec 16 13:08:36.993249 kubelet[3151]: I1216 13:08:36.993224 3151 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cilium-cgroup\") pod \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " Dec 16 13:08:36.993386 kubelet[3151]: I1216 13:08:36.993240 3151 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-hostproc\") pod \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " Dec 16 13:08:36.993386 kubelet[3151]: I1216 13:08:36.993257 3151 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a139832-356c-4e8f-99a9-988f7b0dc6a7-hubble-tls\") pod \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " Dec 16 13:08:36.993386 kubelet[3151]: I1216 13:08:36.993273 3151 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-host-proc-sys-kernel\") pod \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " Dec 16 13:08:36.993386 kubelet[3151]: I1216 13:08:36.993294 3151 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cni-path\") pod \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " Dec 16 13:08:36.993386 kubelet[3151]: I1216 13:08:36.993312 3151 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-etc-cni-netd\") pod \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " Dec 16 13:08:36.993386 kubelet[3151]: I1216 13:08:36.993329 3151 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-host-proc-sys-net\") pod \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " Dec 16 13:08:36.993506 kubelet[3151]: I1216 13:08:36.993346 3151 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cilium-run\") pod \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\" (UID: 
\"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " Dec 16 13:08:36.993506 kubelet[3151]: I1216 13:08:36.993362 3151 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-lib-modules\") pod \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\" (UID: \"8a139832-356c-4e8f-99a9-988f7b0dc6a7\") " Dec 16 13:08:36.993506 kubelet[3151]: I1216 13:08:36.993420 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8a139832-356c-4e8f-99a9-988f7b0dc6a7" (UID: "8a139832-356c-4e8f-99a9-988f7b0dc6a7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:08:36.994211 kubelet[3151]: I1216 13:08:36.993591 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8a139832-356c-4e8f-99a9-988f7b0dc6a7" (UID: "8a139832-356c-4e8f-99a9-988f7b0dc6a7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:08:36.994442 kubelet[3151]: I1216 13:08:36.994419 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8a139832-356c-4e8f-99a9-988f7b0dc6a7" (UID: "8a139832-356c-4e8f-99a9-988f7b0dc6a7"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:08:36.994685 kubelet[3151]: I1216 13:08:36.994507 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-hostproc" (OuterVolumeSpecName: "hostproc") pod "8a139832-356c-4e8f-99a9-988f7b0dc6a7" (UID: "8a139832-356c-4e8f-99a9-988f7b0dc6a7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:08:36.997376 kubelet[3151]: I1216 13:08:36.997345 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8a139832-356c-4e8f-99a9-988f7b0dc6a7" (UID: "8a139832-356c-4e8f-99a9-988f7b0dc6a7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:08:36.997453 kubelet[3151]: I1216 13:08:36.997409 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cni-path" (OuterVolumeSpecName: "cni-path") pod "8a139832-356c-4e8f-99a9-988f7b0dc6a7" (UID: "8a139832-356c-4e8f-99a9-988f7b0dc6a7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:08:36.997453 kubelet[3151]: I1216 13:08:36.997425 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8a139832-356c-4e8f-99a9-988f7b0dc6a7" (UID: "8a139832-356c-4e8f-99a9-988f7b0dc6a7"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:08:36.997453 kubelet[3151]: I1216 13:08:36.997438 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8a139832-356c-4e8f-99a9-988f7b0dc6a7" (UID: "8a139832-356c-4e8f-99a9-988f7b0dc6a7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:08:36.997453 kubelet[3151]: I1216 13:08:36.997452 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8a139832-356c-4e8f-99a9-988f7b0dc6a7" (UID: "8a139832-356c-4e8f-99a9-988f7b0dc6a7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:08:36.998163 kubelet[3151]: I1216 13:08:36.998142 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8a139832-356c-4e8f-99a9-988f7b0dc6a7" (UID: "8a139832-356c-4e8f-99a9-988f7b0dc6a7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:08:36.999097 kubelet[3151]: I1216 13:08:36.999057 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51605413-c1a6-459b-94a7-1f8fb1595ace-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "51605413-c1a6-459b-94a7-1f8fb1595ace" (UID: "51605413-c1a6-459b-94a7-1f8fb1595ace"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:08:37.000054 kubelet[3151]: I1216 13:08:36.999987 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8a139832-356c-4e8f-99a9-988f7b0dc6a7" (UID: "8a139832-356c-4e8f-99a9-988f7b0dc6a7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:08:37.000198 kubelet[3151]: I1216 13:08:37.000159 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51605413-c1a6-459b-94a7-1f8fb1595ace-kube-api-access-2xp9w" (OuterVolumeSpecName: "kube-api-access-2xp9w") pod "51605413-c1a6-459b-94a7-1f8fb1595ace" (UID: "51605413-c1a6-459b-94a7-1f8fb1595ace"). InnerVolumeSpecName "kube-api-access-2xp9w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:08:37.001178 kubelet[3151]: I1216 13:08:37.001147 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a139832-356c-4e8f-99a9-988f7b0dc6a7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8a139832-356c-4e8f-99a9-988f7b0dc6a7" (UID: "8a139832-356c-4e8f-99a9-988f7b0dc6a7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:08:37.001327 kubelet[3151]: I1216 13:08:37.001315 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a139832-356c-4e8f-99a9-988f7b0dc6a7-kube-api-access-2jzw7" (OuterVolumeSpecName: "kube-api-access-2jzw7") pod "8a139832-356c-4e8f-99a9-988f7b0dc6a7" (UID: "8a139832-356c-4e8f-99a9-988f7b0dc6a7"). InnerVolumeSpecName "kube-api-access-2jzw7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:08:37.001434 kubelet[3151]: I1216 13:08:37.001410 3151 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a139832-356c-4e8f-99a9-988f7b0dc6a7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8a139832-356c-4e8f-99a9-988f7b0dc6a7" (UID: "8a139832-356c-4e8f-99a9-988f7b0dc6a7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:08:37.093916 kubelet[3151]: I1216 13:08:37.093891 3151 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-xtables-lock\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.093916 kubelet[3151]: I1216 13:08:37.093915 3151 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51605413-c1a6-459b-94a7-1f8fb1595ace-cilium-config-path\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.094037 kubelet[3151]: I1216 13:08:37.093926 3151 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2xp9w\" (UniqueName: \"kubernetes.io/projected/51605413-c1a6-459b-94a7-1f8fb1595ace-kube-api-access-2xp9w\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.094037 kubelet[3151]: I1216 13:08:37.093937 3151 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cilium-config-path\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.094037 kubelet[3151]: I1216 13:08:37.093946 3151 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a139832-356c-4e8f-99a9-988f7b0dc6a7-clustermesh-secrets\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.094037 
kubelet[3151]: I1216 13:08:37.093954 3151 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2jzw7\" (UniqueName: \"kubernetes.io/projected/8a139832-356c-4e8f-99a9-988f7b0dc6a7-kube-api-access-2jzw7\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.094037 kubelet[3151]: I1216 13:08:37.093962 3151 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cilium-cgroup\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.094037 kubelet[3151]: I1216 13:08:37.093971 3151 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-hostproc\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.094037 kubelet[3151]: I1216 13:08:37.093979 3151 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a139832-356c-4e8f-99a9-988f7b0dc6a7-hubble-tls\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.094037 kubelet[3151]: I1216 13:08:37.093987 3151 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-bpf-maps\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.094222 kubelet[3151]: I1216 13:08:37.093995 3151 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-etc-cni-netd\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.094222 kubelet[3151]: I1216 13:08:37.094004 3151 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-host-proc-sys-kernel\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.094222 
kubelet[3151]: I1216 13:08:37.094014 3151 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cni-path\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.094222 kubelet[3151]: I1216 13:08:37.094025 3151 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-host-proc-sys-net\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.094222 kubelet[3151]: I1216 13:08:37.094037 3151 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-cilium-run\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.094222 kubelet[3151]: I1216 13:08:37.094046 3151 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a139832-356c-4e8f-99a9-988f7b0dc6a7-lib-modules\") on node \"ci-4459.2.2-a-d193ee2782\" DevicePath \"\"" Dec 16 13:08:37.331519 kubelet[3151]: I1216 13:08:37.330436 3151 scope.go:117] "RemoveContainer" containerID="746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10" Dec 16 13:08:37.335142 containerd[1714]: time="2025-12-16T13:08:37.334713244Z" level=info msg="RemoveContainer for \"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\"" Dec 16 13:08:37.336159 systemd[1]: Removed slice kubepods-besteffort-pod51605413_c1a6_459b_94a7_1f8fb1595ace.slice - libcontainer container kubepods-besteffort-pod51605413_c1a6_459b_94a7_1f8fb1595ace.slice. Dec 16 13:08:37.348931 systemd[1]: Removed slice kubepods-burstable-pod8a139832_356c_4e8f_99a9_988f7b0dc6a7.slice - libcontainer container kubepods-burstable-pod8a139832_356c_4e8f_99a9_988f7b0dc6a7.slice. 
Dec 16 13:08:37.349383 systemd[1]: kubepods-burstable-pod8a139832_356c_4e8f_99a9_988f7b0dc6a7.slice: Consumed 5.329s CPU time, 124.9M memory peak, 128K read from disk, 13.3M written to disk. Dec 16 13:08:37.352430 containerd[1714]: time="2025-12-16T13:08:37.352401286Z" level=info msg="RemoveContainer for \"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\" returns successfully" Dec 16 13:08:37.352619 kubelet[3151]: I1216 13:08:37.352595 3151 scope.go:117] "RemoveContainer" containerID="746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10" Dec 16 13:08:37.352852 containerd[1714]: time="2025-12-16T13:08:37.352760944Z" level=error msg="ContainerStatus for \"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\": not found" Dec 16 13:08:37.353966 kubelet[3151]: E1216 13:08:37.353916 3151 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\": not found" containerID="746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10" Dec 16 13:08:37.354265 kubelet[3151]: I1216 13:08:37.354111 3151 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10"} err="failed to get container status \"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\": rpc error: code = NotFound desc = an error occurred when try to find container \"746a724c458cc2e7c45190f1ce6ed18bc89ea6c1ad76ef5b19484c0666d03b10\": not found" Dec 16 13:08:37.354344 kubelet[3151]: I1216 13:08:37.354313 3151 scope.go:117] "RemoveContainer" containerID="d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9" Dec 16 13:08:37.356473 
containerd[1714]: time="2025-12-16T13:08:37.356442520Z" level=info msg="RemoveContainer for \"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\"" Dec 16 13:08:37.367627 containerd[1714]: time="2025-12-16T13:08:37.367588643Z" level=info msg="RemoveContainer for \"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\" returns successfully" Dec 16 13:08:37.367774 kubelet[3151]: I1216 13:08:37.367758 3151 scope.go:117] "RemoveContainer" containerID="a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917" Dec 16 13:08:37.369210 containerd[1714]: time="2025-12-16T13:08:37.369167629Z" level=info msg="RemoveContainer for \"a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917\"" Dec 16 13:08:37.377699 containerd[1714]: time="2025-12-16T13:08:37.377662663Z" level=info msg="RemoveContainer for \"a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917\" returns successfully" Dec 16 13:08:37.377873 kubelet[3151]: I1216 13:08:37.377844 3151 scope.go:117] "RemoveContainer" containerID="8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589" Dec 16 13:08:37.379500 containerd[1714]: time="2025-12-16T13:08:37.379480930Z" level=info msg="RemoveContainer for \"8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589\"" Dec 16 13:08:37.387893 containerd[1714]: time="2025-12-16T13:08:37.387863882Z" level=info msg="RemoveContainer for \"8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589\" returns successfully" Dec 16 13:08:37.388237 kubelet[3151]: I1216 13:08:37.388004 3151 scope.go:117] "RemoveContainer" containerID="f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814" Dec 16 13:08:37.389274 containerd[1714]: time="2025-12-16T13:08:37.389250301Z" level=info msg="RemoveContainer for \"f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814\"" Dec 16 13:08:37.405517 containerd[1714]: time="2025-12-16T13:08:37.405487821Z" level=info msg="RemoveContainer for 
\"f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814\" returns successfully" Dec 16 13:08:37.405647 kubelet[3151]: I1216 13:08:37.405631 3151 scope.go:117] "RemoveContainer" containerID="b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b" Dec 16 13:08:37.408133 containerd[1714]: time="2025-12-16T13:08:37.408107352Z" level=info msg="RemoveContainer for \"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\"" Dec 16 13:08:37.415611 containerd[1714]: time="2025-12-16T13:08:37.415579242Z" level=info msg="RemoveContainer for \"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\" returns successfully" Dec 16 13:08:37.415731 kubelet[3151]: I1216 13:08:37.415714 3151 scope.go:117] "RemoveContainer" containerID="d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9" Dec 16 13:08:37.415938 containerd[1714]: time="2025-12-16T13:08:37.415908722Z" level=error msg="ContainerStatus for \"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\": not found" Dec 16 13:08:37.416073 kubelet[3151]: E1216 13:08:37.416055 3151 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\": not found" containerID="d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9" Dec 16 13:08:37.416114 kubelet[3151]: I1216 13:08:37.416077 3151 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9"} err="failed to get container status \"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"d82380adf8207c9590f9a364f014aba78fd34eeed05440418cc91cbb14bc89f9\": not found" Dec 16 13:08:37.416114 kubelet[3151]: I1216 13:08:37.416097 3151 scope.go:117] "RemoveContainer" containerID="a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917" Dec 16 13:08:37.416281 containerd[1714]: time="2025-12-16T13:08:37.416256205Z" level=error msg="ContainerStatus for \"a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917\": not found" Dec 16 13:08:37.416360 kubelet[3151]: E1216 13:08:37.416344 3151 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917\": not found" containerID="a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917" Dec 16 13:08:37.416428 kubelet[3151]: I1216 13:08:37.416360 3151 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917"} err="failed to get container status \"a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917\": rpc error: code = NotFound desc = an error occurred when try to find container \"a00290ca6ffb0e2d10347e1444752d5b25ee1c85ce30cc87e7317e310dc72917\": not found" Dec 16 13:08:37.416428 kubelet[3151]: I1216 13:08:37.416425 3151 scope.go:117] "RemoveContainer" containerID="8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589" Dec 16 13:08:37.416587 containerd[1714]: time="2025-12-16T13:08:37.416566402Z" level=error msg="ContainerStatus for \"8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589\": not found" Dec 16 13:08:37.416692 kubelet[3151]: E1216 13:08:37.416673 3151 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589\": not found" containerID="8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589" Dec 16 13:08:37.416723 kubelet[3151]: I1216 13:08:37.416697 3151 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589"} err="failed to get container status \"8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e5368325794e47dd3e9782a4c5e185ac67b2888f26a8bb3582b028ea0154589\": not found" Dec 16 13:08:37.416723 kubelet[3151]: I1216 13:08:37.416710 3151 scope.go:117] "RemoveContainer" containerID="f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814" Dec 16 13:08:37.416925 containerd[1714]: time="2025-12-16T13:08:37.416854608Z" level=error msg="ContainerStatus for \"f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814\": not found" Dec 16 13:08:37.417031 kubelet[3151]: E1216 13:08:37.417005 3151 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814\": not found" containerID="f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814" Dec 16 13:08:37.417031 kubelet[3151]: I1216 13:08:37.417023 3151 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814"} err="failed to get container status \"f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3126e9e801dbcc03d28ae0cba865e9589f477a24ff83aef0f9f183eb02ae814\": not found" Dec 16 13:08:37.417092 kubelet[3151]: I1216 13:08:37.417037 3151 scope.go:117] "RemoveContainer" containerID="b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b" Dec 16 13:08:37.417297 containerd[1714]: time="2025-12-16T13:08:37.417241835Z" level=error msg="ContainerStatus for \"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\": not found" Dec 16 13:08:37.417363 kubelet[3151]: E1216 13:08:37.417351 3151 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\": not found" containerID="b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b" Dec 16 13:08:37.417418 kubelet[3151]: I1216 13:08:37.417367 3151 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b"} err="failed to get container status \"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5513da88905f4d883f2275b2ce0708ab384b88e9c517e2e3ca724301220731b\": not found" Dec 16 13:08:37.736706 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a72e9896ccf56bce32941b5b2a8534e03f314621af4db8f4962a46fe25c55876-shm.mount: Deactivated successfully. 
Dec 16 13:08:37.736803 systemd[1]: var-lib-kubelet-pods-51605413\x2dc1a6\x2d459b\x2d94a7\x2d1f8fb1595ace-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2xp9w.mount: Deactivated successfully. Dec 16 13:08:37.736865 systemd[1]: var-lib-kubelet-pods-8a139832\x2d356c\x2d4e8f\x2d99a9\x2d988f7b0dc6a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2jzw7.mount: Deactivated successfully. Dec 16 13:08:37.736923 systemd[1]: var-lib-kubelet-pods-8a139832\x2d356c\x2d4e8f\x2d99a9\x2d988f7b0dc6a7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 16 13:08:37.736966 systemd[1]: var-lib-kubelet-pods-8a139832\x2d356c\x2d4e8f\x2d99a9\x2d988f7b0dc6a7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 16 13:08:38.012273 kubelet[3151]: I1216 13:08:38.011979 3151 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51605413-c1a6-459b-94a7-1f8fb1595ace" path="/var/lib/kubelet/pods/51605413-c1a6-459b-94a7-1f8fb1595ace/volumes" Dec 16 13:08:38.012707 kubelet[3151]: I1216 13:08:38.012521 3151 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a139832-356c-4e8f-99a9-988f7b0dc6a7" path="/var/lib/kubelet/pods/8a139832-356c-4e8f-99a9-988f7b0dc6a7/volumes" Dec 16 13:08:38.698415 sshd[4697]: Connection closed by 10.200.16.10 port 49842 Dec 16 13:08:38.698979 sshd-session[4694]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:38.701968 systemd[1]: sshd@21-10.200.0.34:22-10.200.16.10:49842.service: Deactivated successfully. Dec 16 13:08:38.703700 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 13:08:38.706071 systemd-logind[1688]: Session 24 logged out. Waiting for processes to exit. Dec 16 13:08:38.706915 systemd-logind[1688]: Removed session 24. Dec 16 13:08:38.802086 systemd[1]: Started sshd@22-10.200.0.34:22-10.200.16.10:49844.service - OpenSSH per-connection server daemon (10.200.16.10:49844). 
Dec 16 13:08:39.355357 sshd[4842]: Accepted publickey for core from 10.200.16.10 port 49844 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:08:39.356526 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:39.360851 systemd-logind[1688]: New session 25 of user core. Dec 16 13:08:39.366353 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 13:08:40.093433 kubelet[3151]: I1216 13:08:40.093397 3151 memory_manager.go:355] "RemoveStaleState removing state" podUID="51605413-c1a6-459b-94a7-1f8fb1595ace" containerName="cilium-operator" Dec 16 13:08:40.093433 kubelet[3151]: I1216 13:08:40.093434 3151 memory_manager.go:355] "RemoveStaleState removing state" podUID="8a139832-356c-4e8f-99a9-988f7b0dc6a7" containerName="cilium-agent" Dec 16 13:08:40.104690 kubelet[3151]: E1216 13:08:40.104653 3151 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 13:08:40.109616 systemd[1]: Created slice kubepods-burstable-pod2a783068_5f89_40e2_bc6f_c415b6dc52ea.slice - libcontainer container kubepods-burstable-pod2a783068_5f89_40e2_bc6f_c415b6dc52ea.slice. Dec 16 13:08:40.127130 sshd[4845]: Connection closed by 10.200.16.10 port 49844 Dec 16 13:08:40.129815 sshd-session[4842]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:40.133213 systemd-logind[1688]: Session 25 logged out. Waiting for processes to exit. Dec 16 13:08:40.134079 systemd[1]: sshd@22-10.200.0.34:22-10.200.16.10:49844.service: Deactivated successfully. Dec 16 13:08:40.137104 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 13:08:40.139789 systemd-logind[1688]: Removed session 25. 
Dec 16 13:08:40.212247 kubelet[3151]: I1216 13:08:40.212210 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a783068-5f89-40e2-bc6f-c415b6dc52ea-cni-path\") pod \"cilium-7flpr\" (UID: \"2a783068-5f89-40e2-bc6f-c415b6dc52ea\") " pod="kube-system/cilium-7flpr"
Dec 16 13:08:40.212247 kubelet[3151]: I1216 13:08:40.212253 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a783068-5f89-40e2-bc6f-c415b6dc52ea-lib-modules\") pod \"cilium-7flpr\" (UID: \"2a783068-5f89-40e2-bc6f-c415b6dc52ea\") " pod="kube-system/cilium-7flpr"
Dec 16 13:08:40.212247 kubelet[3151]: I1216 13:08:40.212272 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a783068-5f89-40e2-bc6f-c415b6dc52ea-host-proc-sys-kernel\") pod \"cilium-7flpr\" (UID: \"2a783068-5f89-40e2-bc6f-c415b6dc52ea\") " pod="kube-system/cilium-7flpr"
Dec 16 13:08:40.212529 kubelet[3151]: I1216 13:08:40.212291 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7d24\" (UniqueName: \"kubernetes.io/projected/2a783068-5f89-40e2-bc6f-c415b6dc52ea-kube-api-access-z7d24\") pod \"cilium-7flpr\" (UID: \"2a783068-5f89-40e2-bc6f-c415b6dc52ea\") " pod="kube-system/cilium-7flpr"
Dec 16 13:08:40.212529 kubelet[3151]: I1216 13:08:40.212308 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a783068-5f89-40e2-bc6f-c415b6dc52ea-cilium-run\") pod \"cilium-7flpr\" (UID: \"2a783068-5f89-40e2-bc6f-c415b6dc52ea\") " pod="kube-system/cilium-7flpr"
Dec 16 13:08:40.212529 kubelet[3151]: I1216 13:08:40.212323 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a783068-5f89-40e2-bc6f-c415b6dc52ea-cilium-cgroup\") pod \"cilium-7flpr\" (UID: \"2a783068-5f89-40e2-bc6f-c415b6dc52ea\") " pod="kube-system/cilium-7flpr"
Dec 16 13:08:40.212529 kubelet[3151]: I1216 13:08:40.212338 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a783068-5f89-40e2-bc6f-c415b6dc52ea-clustermesh-secrets\") pod \"cilium-7flpr\" (UID: \"2a783068-5f89-40e2-bc6f-c415b6dc52ea\") " pod="kube-system/cilium-7flpr"
Dec 16 13:08:40.212529 kubelet[3151]: I1216 13:08:40.212357 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a783068-5f89-40e2-bc6f-c415b6dc52ea-etc-cni-netd\") pod \"cilium-7flpr\" (UID: \"2a783068-5f89-40e2-bc6f-c415b6dc52ea\") " pod="kube-system/cilium-7flpr"
Dec 16 13:08:40.212529 kubelet[3151]: I1216 13:08:40.212374 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a783068-5f89-40e2-bc6f-c415b6dc52ea-bpf-maps\") pod \"cilium-7flpr\" (UID: \"2a783068-5f89-40e2-bc6f-c415b6dc52ea\") " pod="kube-system/cilium-7flpr"
Dec 16 13:08:40.212648 kubelet[3151]: I1216 13:08:40.212390 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a783068-5f89-40e2-bc6f-c415b6dc52ea-xtables-lock\") pod \"cilium-7flpr\" (UID: \"2a783068-5f89-40e2-bc6f-c415b6dc52ea\") " pod="kube-system/cilium-7flpr"
Dec 16 13:08:40.212648 kubelet[3151]: I1216 13:08:40.212405 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2a783068-5f89-40e2-bc6f-c415b6dc52ea-cilium-ipsec-secrets\") pod \"cilium-7flpr\" (UID: \"2a783068-5f89-40e2-bc6f-c415b6dc52ea\") " pod="kube-system/cilium-7flpr"
Dec 16 13:08:40.212648 kubelet[3151]: I1216 13:08:40.212423 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a783068-5f89-40e2-bc6f-c415b6dc52ea-host-proc-sys-net\") pod \"cilium-7flpr\" (UID: \"2a783068-5f89-40e2-bc6f-c415b6dc52ea\") " pod="kube-system/cilium-7flpr"
Dec 16 13:08:40.212648 kubelet[3151]: I1216 13:08:40.212440 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a783068-5f89-40e2-bc6f-c415b6dc52ea-hostproc\") pod \"cilium-7flpr\" (UID: \"2a783068-5f89-40e2-bc6f-c415b6dc52ea\") " pod="kube-system/cilium-7flpr"
Dec 16 13:08:40.212648 kubelet[3151]: I1216 13:08:40.212456 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a783068-5f89-40e2-bc6f-c415b6dc52ea-cilium-config-path\") pod \"cilium-7flpr\" (UID: \"2a783068-5f89-40e2-bc6f-c415b6dc52ea\") " pod="kube-system/cilium-7flpr"
Dec 16 13:08:40.212648 kubelet[3151]: I1216 13:08:40.212471 3151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a783068-5f89-40e2-bc6f-c415b6dc52ea-hubble-tls\") pod \"cilium-7flpr\" (UID: \"2a783068-5f89-40e2-bc6f-c415b6dc52ea\") " pod="kube-system/cilium-7flpr"
Dec 16 13:08:40.223057 systemd[1]: Started sshd@23-10.200.0.34:22-10.200.16.10:40302.service - OpenSSH per-connection server daemon (10.200.16.10:40302).
Dec 16 13:08:40.415123 containerd[1714]: time="2025-12-16T13:08:40.414671159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7flpr,Uid:2a783068-5f89-40e2-bc6f-c415b6dc52ea,Namespace:kube-system,Attempt:0,}"
Dec 16 13:08:40.453586 containerd[1714]: time="2025-12-16T13:08:40.453546572Z" level=info msg="connecting to shim 63977b86490fad0fc247d93a4ba6932a50bb12d02e29809c280a45bc8303aac5" address="unix:///run/containerd/s/34fcccd7a9a2f56ffe3cb9dc1521689986ac1a6e07e7dc4d3dc432c747cc2b1b" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:08:40.478338 systemd[1]: Started cri-containerd-63977b86490fad0fc247d93a4ba6932a50bb12d02e29809c280a45bc8303aac5.scope - libcontainer container 63977b86490fad0fc247d93a4ba6932a50bb12d02e29809c280a45bc8303aac5.
Dec 16 13:08:40.515928 containerd[1714]: time="2025-12-16T13:08:40.515899295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7flpr,Uid:2a783068-5f89-40e2-bc6f-c415b6dc52ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"63977b86490fad0fc247d93a4ba6932a50bb12d02e29809c280a45bc8303aac5\""
Dec 16 13:08:40.518401 containerd[1714]: time="2025-12-16T13:08:40.518373539Z" level=info msg="CreateContainer within sandbox \"63977b86490fad0fc247d93a4ba6932a50bb12d02e29809c280a45bc8303aac5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 16 13:08:40.533342 containerd[1714]: time="2025-12-16T13:08:40.533316618Z" level=info msg="Container af46d9b8291eb9d5e78a17cdd65332cd164a106ade6cfc9233d27c0f58df7890: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:08:40.547513 containerd[1714]: time="2025-12-16T13:08:40.547488166Z" level=info msg="CreateContainer within sandbox \"63977b86490fad0fc247d93a4ba6932a50bb12d02e29809c280a45bc8303aac5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"af46d9b8291eb9d5e78a17cdd65332cd164a106ade6cfc9233d27c0f58df7890\""
Dec 16 13:08:40.547802 containerd[1714]: time="2025-12-16T13:08:40.547787047Z" level=info msg="StartContainer for \"af46d9b8291eb9d5e78a17cdd65332cd164a106ade6cfc9233d27c0f58df7890\""
Dec 16 13:08:40.548874 containerd[1714]: time="2025-12-16T13:08:40.548679726Z" level=info msg="connecting to shim af46d9b8291eb9d5e78a17cdd65332cd164a106ade6cfc9233d27c0f58df7890" address="unix:///run/containerd/s/34fcccd7a9a2f56ffe3cb9dc1521689986ac1a6e07e7dc4d3dc432c747cc2b1b" protocol=ttrpc version=3
Dec 16 13:08:40.565345 systemd[1]: Started cri-containerd-af46d9b8291eb9d5e78a17cdd65332cd164a106ade6cfc9233d27c0f58df7890.scope - libcontainer container af46d9b8291eb9d5e78a17cdd65332cd164a106ade6cfc9233d27c0f58df7890.
Dec 16 13:08:40.594580 containerd[1714]: time="2025-12-16T13:08:40.594549418Z" level=info msg="StartContainer for \"af46d9b8291eb9d5e78a17cdd65332cd164a106ade6cfc9233d27c0f58df7890\" returns successfully"
Dec 16 13:08:40.597006 systemd[1]: cri-containerd-af46d9b8291eb9d5e78a17cdd65332cd164a106ade6cfc9233d27c0f58df7890.scope: Deactivated successfully.
Dec 16 13:08:40.600587 containerd[1714]: time="2025-12-16T13:08:40.600446640Z" level=info msg="received container exit event container_id:\"af46d9b8291eb9d5e78a17cdd65332cd164a106ade6cfc9233d27c0f58df7890\" id:\"af46d9b8291eb9d5e78a17cdd65332cd164a106ade6cfc9233d27c0f58df7890\" pid:4922 exited_at:{seconds:1765890520 nanos:599923859}"
Dec 16 13:08:40.769903 sshd[4857]: Accepted publickey for core from 10.200.16.10 port 40302 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:08:40.771399 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:40.775255 systemd-logind[1688]: New session 26 of user core.
Dec 16 13:08:40.788317 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 16 13:08:41.155135 sshd[4955]: Connection closed by 10.200.16.10 port 40302
Dec 16 13:08:41.155689 sshd-session[4857]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:41.159181 systemd[1]: sshd@23-10.200.0.34:22-10.200.16.10:40302.service: Deactivated successfully.
Dec 16 13:08:41.162751 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 13:08:41.165085 systemd-logind[1688]: Session 26 logged out. Waiting for processes to exit.
Dec 16 13:08:41.166538 systemd-logind[1688]: Removed session 26.
Dec 16 13:08:41.257117 systemd[1]: Started sshd@24-10.200.0.34:22-10.200.16.10:40304.service - OpenSSH per-connection server daemon (10.200.16.10:40304).
Dec 16 13:08:41.358132 containerd[1714]: time="2025-12-16T13:08:41.357953118Z" level=info msg="CreateContainer within sandbox \"63977b86490fad0fc247d93a4ba6932a50bb12d02e29809c280a45bc8303aac5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 13:08:41.383771 containerd[1714]: time="2025-12-16T13:08:41.383743928Z" level=info msg="Container 9446f6e53740ad7ad625283e5e65070284e6248e8340fa06685605ecd366f25d: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:08:41.386806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151149010.mount: Deactivated successfully.
Dec 16 13:08:41.415166 containerd[1714]: time="2025-12-16T13:08:41.415080550Z" level=info msg="CreateContainer within sandbox \"63977b86490fad0fc247d93a4ba6932a50bb12d02e29809c280a45bc8303aac5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9446f6e53740ad7ad625283e5e65070284e6248e8340fa06685605ecd366f25d\""
Dec 16 13:08:41.416002 containerd[1714]: time="2025-12-16T13:08:41.415891245Z" level=info msg="StartContainer for \"9446f6e53740ad7ad625283e5e65070284e6248e8340fa06685605ecd366f25d\""
Dec 16 13:08:41.417233 containerd[1714]: time="2025-12-16T13:08:41.417165625Z" level=info msg="connecting to shim 9446f6e53740ad7ad625283e5e65070284e6248e8340fa06685605ecd366f25d" address="unix:///run/containerd/s/34fcccd7a9a2f56ffe3cb9dc1521689986ac1a6e07e7dc4d3dc432c747cc2b1b" protocol=ttrpc version=3
Dec 16 13:08:41.439361 systemd[1]: Started cri-containerd-9446f6e53740ad7ad625283e5e65070284e6248e8340fa06685605ecd366f25d.scope - libcontainer container 9446f6e53740ad7ad625283e5e65070284e6248e8340fa06685605ecd366f25d.
Dec 16 13:08:41.471491 systemd[1]: cri-containerd-9446f6e53740ad7ad625283e5e65070284e6248e8340fa06685605ecd366f25d.scope: Deactivated successfully.
Dec 16 13:08:41.471757 containerd[1714]: time="2025-12-16T13:08:41.471680004Z" level=info msg="StartContainer for \"9446f6e53740ad7ad625283e5e65070284e6248e8340fa06685605ecd366f25d\" returns successfully"
Dec 16 13:08:41.473603 containerd[1714]: time="2025-12-16T13:08:41.473398554Z" level=info msg="received container exit event container_id:\"9446f6e53740ad7ad625283e5e65070284e6248e8340fa06685605ecd366f25d\" id:\"9446f6e53740ad7ad625283e5e65070284e6248e8340fa06685605ecd366f25d\" pid:4978 exited_at:{seconds:1765890521 nanos:473163161}"
Dec 16 13:08:41.490276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9446f6e53740ad7ad625283e5e65070284e6248e8340fa06685605ecd366f25d-rootfs.mount: Deactivated successfully.
Dec 16 13:08:41.811651 sshd[4962]: Accepted publickey for core from 10.200.16.10 port 40304 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:08:41.812836 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:41.817304 systemd-logind[1688]: New session 27 of user core.
Dec 16 13:08:41.821452 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 16 13:08:42.362026 containerd[1714]: time="2025-12-16T13:08:42.361710142Z" level=info msg="CreateContainer within sandbox \"63977b86490fad0fc247d93a4ba6932a50bb12d02e29809c280a45bc8303aac5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 13:08:42.385058 containerd[1714]: time="2025-12-16T13:08:42.384992800Z" level=info msg="Container 7197f8f70b3664e57bfc2a570c020d9936f31bf77e5ffc1fe9bec5fe09036d35: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:08:42.402623 containerd[1714]: time="2025-12-16T13:08:42.402588675Z" level=info msg="CreateContainer within sandbox \"63977b86490fad0fc247d93a4ba6932a50bb12d02e29809c280a45bc8303aac5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7197f8f70b3664e57bfc2a570c020d9936f31bf77e5ffc1fe9bec5fe09036d35\""
Dec 16 13:08:42.403291 containerd[1714]: time="2025-12-16T13:08:42.403266670Z" level=info msg="StartContainer for \"7197f8f70b3664e57bfc2a570c020d9936f31bf77e5ffc1fe9bec5fe09036d35\""
Dec 16 13:08:42.405385 containerd[1714]: time="2025-12-16T13:08:42.405348974Z" level=info msg="connecting to shim 7197f8f70b3664e57bfc2a570c020d9936f31bf77e5ffc1fe9bec5fe09036d35" address="unix:///run/containerd/s/34fcccd7a9a2f56ffe3cb9dc1521689986ac1a6e07e7dc4d3dc432c747cc2b1b" protocol=ttrpc version=3
Dec 16 13:08:42.425346 systemd[1]: Started cri-containerd-7197f8f70b3664e57bfc2a570c020d9936f31bf77e5ffc1fe9bec5fe09036d35.scope - libcontainer container 7197f8f70b3664e57bfc2a570c020d9936f31bf77e5ffc1fe9bec5fe09036d35.
Dec 16 13:08:42.482527 kubelet[3151]: I1216 13:08:42.481348 3151 setters.go:602] "Node became not ready" node="ci-4459.2.2-a-d193ee2782" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T13:08:42Z","lastTransitionTime":"2025-12-16T13:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 16 13:08:42.487724 systemd[1]: cri-containerd-7197f8f70b3664e57bfc2a570c020d9936f31bf77e5ffc1fe9bec5fe09036d35.scope: Deactivated successfully.
Dec 16 13:08:42.490139 containerd[1714]: time="2025-12-16T13:08:42.490037140Z" level=info msg="received container exit event container_id:\"7197f8f70b3664e57bfc2a570c020d9936f31bf77e5ffc1fe9bec5fe09036d35\" id:\"7197f8f70b3664e57bfc2a570c020d9936f31bf77e5ffc1fe9bec5fe09036d35\" pid:5030 exited_at:{seconds:1765890522 nanos:488790915}"
Dec 16 13:08:42.492134 containerd[1714]: time="2025-12-16T13:08:42.492070981Z" level=info msg="StartContainer for \"7197f8f70b3664e57bfc2a570c020d9936f31bf77e5ffc1fe9bec5fe09036d35\" returns successfully"
Dec 16 13:08:42.513884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7197f8f70b3664e57bfc2a570c020d9936f31bf77e5ffc1fe9bec5fe09036d35-rootfs.mount: Deactivated successfully.
Dec 16 13:08:43.367045 containerd[1714]: time="2025-12-16T13:08:43.366993747Z" level=info msg="CreateContainer within sandbox \"63977b86490fad0fc247d93a4ba6932a50bb12d02e29809c280a45bc8303aac5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 13:08:43.385368 containerd[1714]: time="2025-12-16T13:08:43.385337474Z" level=info msg="Container 126b18c9e51d8ea9e584cba542f4226b73a8d76ac6bf5621f9d6f1ab4f9f606c: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:08:43.403397 containerd[1714]: time="2025-12-16T13:08:43.403368441Z" level=info msg="CreateContainer within sandbox \"63977b86490fad0fc247d93a4ba6932a50bb12d02e29809c280a45bc8303aac5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"126b18c9e51d8ea9e584cba542f4226b73a8d76ac6bf5621f9d6f1ab4f9f606c\""
Dec 16 13:08:43.403875 containerd[1714]: time="2025-12-16T13:08:43.403793292Z" level=info msg="StartContainer for \"126b18c9e51d8ea9e584cba542f4226b73a8d76ac6bf5621f9d6f1ab4f9f606c\""
Dec 16 13:08:43.404934 containerd[1714]: time="2025-12-16T13:08:43.404908778Z" level=info msg="connecting to shim 126b18c9e51d8ea9e584cba542f4226b73a8d76ac6bf5621f9d6f1ab4f9f606c" address="unix:///run/containerd/s/34fcccd7a9a2f56ffe3cb9dc1521689986ac1a6e07e7dc4d3dc432c747cc2b1b" protocol=ttrpc version=3
Dec 16 13:08:43.431348 systemd[1]: Started cri-containerd-126b18c9e51d8ea9e584cba542f4226b73a8d76ac6bf5621f9d6f1ab4f9f606c.scope - libcontainer container 126b18c9e51d8ea9e584cba542f4226b73a8d76ac6bf5621f9d6f1ab4f9f606c.
Dec 16 13:08:43.452840 systemd[1]: cri-containerd-126b18c9e51d8ea9e584cba542f4226b73a8d76ac6bf5621f9d6f1ab4f9f606c.scope: Deactivated successfully.
Dec 16 13:08:43.457009 containerd[1714]: time="2025-12-16T13:08:43.456851655Z" level=info msg="received container exit event container_id:\"126b18c9e51d8ea9e584cba542f4226b73a8d76ac6bf5621f9d6f1ab4f9f606c\" id:\"126b18c9e51d8ea9e584cba542f4226b73a8d76ac6bf5621f9d6f1ab4f9f606c\" pid:5071 exited_at:{seconds:1765890523 nanos:453743736}"
Dec 16 13:08:43.468580 containerd[1714]: time="2025-12-16T13:08:43.468550766Z" level=info msg="StartContainer for \"126b18c9e51d8ea9e584cba542f4226b73a8d76ac6bf5621f9d6f1ab4f9f606c\" returns successfully"
Dec 16 13:08:43.480737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-126b18c9e51d8ea9e584cba542f4226b73a8d76ac6bf5621f9d6f1ab4f9f606c-rootfs.mount: Deactivated successfully.
Dec 16 13:08:44.373219 containerd[1714]: time="2025-12-16T13:08:44.373138665Z" level=info msg="CreateContainer within sandbox \"63977b86490fad0fc247d93a4ba6932a50bb12d02e29809c280a45bc8303aac5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 13:08:44.399214 containerd[1714]: time="2025-12-16T13:08:44.397932983Z" level=info msg="Container e89638ace0f47e6b2b55912ea7b0e07820ef548f90dec3bf6fb19356e73cf5fd: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:08:44.401085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608961840.mount: Deactivated successfully.
Dec 16 13:08:44.419329 containerd[1714]: time="2025-12-16T13:08:44.419300553Z" level=info msg="CreateContainer within sandbox \"63977b86490fad0fc247d93a4ba6932a50bb12d02e29809c280a45bc8303aac5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e89638ace0f47e6b2b55912ea7b0e07820ef548f90dec3bf6fb19356e73cf5fd\""
Dec 16 13:08:44.419834 containerd[1714]: time="2025-12-16T13:08:44.419810937Z" level=info msg="StartContainer for \"e89638ace0f47e6b2b55912ea7b0e07820ef548f90dec3bf6fb19356e73cf5fd\""
Dec 16 13:08:44.420883 containerd[1714]: time="2025-12-16T13:08:44.420851431Z" level=info msg="connecting to shim e89638ace0f47e6b2b55912ea7b0e07820ef548f90dec3bf6fb19356e73cf5fd" address="unix:///run/containerd/s/34fcccd7a9a2f56ffe3cb9dc1521689986ac1a6e07e7dc4d3dc432c747cc2b1b" protocol=ttrpc version=3
Dec 16 13:08:44.440348 systemd[1]: Started cri-containerd-e89638ace0f47e6b2b55912ea7b0e07820ef548f90dec3bf6fb19356e73cf5fd.scope - libcontainer container e89638ace0f47e6b2b55912ea7b0e07820ef548f90dec3bf6fb19356e73cf5fd.
Dec 16 13:08:44.491366 containerd[1714]: time="2025-12-16T13:08:44.491322860Z" level=info msg="StartContainer for \"e89638ace0f47e6b2b55912ea7b0e07820ef548f90dec3bf6fb19356e73cf5fd\" returns successfully"
Dec 16 13:08:44.807229 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512))
Dec 16 13:08:45.391932 kubelet[3151]: I1216 13:08:45.391869 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7flpr" podStartSLOduration=5.391852706 podStartE2EDuration="5.391852706s" podCreationTimestamp="2025-12-16 13:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:08:45.391571903 +0000 UTC m=+155.462751200" watchObservedRunningTime="2025-12-16 13:08:45.391852706 +0000 UTC m=+155.463032004"
Dec 16 13:08:47.377038 systemd-networkd[1335]: lxc_health: Link UP
Dec 16 13:08:47.378424 systemd-networkd[1335]: lxc_health: Gained carrier
Dec 16 13:08:48.460370 systemd-networkd[1335]: lxc_health: Gained IPv6LL
Dec 16 13:08:52.822566 sshd[5010]: Connection closed by 10.200.16.10 port 40304
Dec 16 13:08:52.823146 sshd-session[4962]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:52.826619 systemd-logind[1688]: Session 27 logged out. Waiting for processes to exit.
Dec 16 13:08:52.826857 systemd[1]: sshd@24-10.200.0.34:22-10.200.16.10:40304.service: Deactivated successfully.
Dec 16 13:08:52.829043 systemd[1]: session-27.scope: Deactivated successfully.
Dec 16 13:08:52.830794 systemd-logind[1688]: Removed session 27.