Dec 12 18:41:07.106275 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 12 18:41:07.106334 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:41:07.106347 kernel: BIOS-provided physical RAM map:
Dec 12 18:41:07.106354 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 12 18:41:07.106362 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 12 18:41:07.106368 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Dec 12 18:41:07.106376 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Dec 12 18:41:07.106383 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Dec 12 18:41:07.106390 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Dec 12 18:41:07.106398 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 12 18:41:07.106405 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 12 18:41:07.106412 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 12 18:41:07.106419 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 12 18:41:07.106426 kernel: printk: legacy bootconsole [earlyser0] enabled
Dec 12 18:41:07.106435 kernel: NX (Execute Disable) protection: active
Dec 12 18:41:07.106445 kernel: APIC: Static calls initialized
Dec 12 18:41:07.106452 kernel: efi: EFI v2.7 by Microsoft
Dec 12 18:41:07.106460 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa3018 RNG=0x3ffd2018
Dec 12 18:41:07.106467 kernel: random: crng init done
Dec 12 18:41:07.106474 kernel: secureboot: Secure boot disabled
Dec 12 18:41:07.106481 kernel: SMBIOS 3.1.0 present.
Dec 12 18:41:07.106488 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Dec 12 18:41:07.106496 kernel: DMI: Memory slots populated: 2/2
Dec 12 18:41:07.106503 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 12 18:41:07.106511 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Dec 12 18:41:07.106518 kernel: Hyper-V: Nested features: 0x3e0101
Dec 12 18:41:07.106528 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 12 18:41:07.106535 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 12 18:41:07.106542 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 12 18:41:07.106550 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 12 18:41:07.106557 kernel: tsc: Detected 2299.999 MHz processor
Dec 12 18:41:07.106564 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 18:41:07.106572 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 18:41:07.106580 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Dec 12 18:41:07.106588 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 12 18:41:07.106596 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 18:41:07.106606 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Dec 12 18:41:07.106614 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Dec 12 18:41:07.106621 kernel: Using GB pages for direct mapping
Dec 12 18:41:07.106629 kernel: ACPI: Early table checksum verification disabled
Dec 12 18:41:07.106640 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 12 18:41:07.106648 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 12 18:41:07.106657 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 12 18:41:07.106665 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 12 18:41:07.106673 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 12 18:41:07.106682 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 12 18:41:07.106690 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 12 18:41:07.106698 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 12 18:41:07.106706 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Dec 12 18:41:07.106715 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Dec 12 18:41:07.106723 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 12 18:41:07.106731 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 12 18:41:07.106738 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Dec 12 18:41:07.106746 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 12 18:41:07.106754 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 12 18:41:07.106762 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 12 18:41:07.106770 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 12 18:41:07.106778 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 12 18:41:07.106789 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Dec 12 18:41:07.106797 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 12 18:41:07.106804 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Dec 12 18:41:07.106812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Dec 12 18:41:07.106821 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Dec 12 18:41:07.106828 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Dec 12 18:41:07.106835 kernel: Zone ranges:
Dec 12 18:41:07.106843 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 18:41:07.106851 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 12 18:41:07.106861 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 12 18:41:07.106869 kernel: Device empty
Dec 12 18:41:07.106877 kernel: Movable zone start for each node
Dec 12 18:41:07.106886 kernel: Early memory node ranges
Dec 12 18:41:07.106893 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 12 18:41:07.106901 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Dec 12 18:41:07.106907 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Dec 12 18:41:07.106914 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 12 18:41:07.106921 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 12 18:41:07.106931 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 12 18:41:07.106940 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:41:07.106948 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 12 18:41:07.106956 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 12 18:41:07.106965 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Dec 12 18:41:07.106973 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 12 18:41:07.106980 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 12 18:41:07.106987 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 12 18:41:07.106995 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 18:41:07.107003 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 18:41:07.107010 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 12 18:41:07.107017 kernel: TSC deadline timer available
Dec 12 18:41:07.107024 kernel: CPU topo: Max. logical packages: 1
Dec 12 18:41:07.107032 kernel: CPU topo: Max. logical dies: 1
Dec 12 18:41:07.107039 kernel: CPU topo: Max. dies per package: 1
Dec 12 18:41:07.107046 kernel: CPU topo: Max. threads per core: 2
Dec 12 18:41:07.107054 kernel: CPU topo: Num. cores per package: 1
Dec 12 18:41:07.107061 kernel: CPU topo: Num. threads per package: 2
Dec 12 18:41:07.107068 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 12 18:41:07.107078 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 12 18:41:07.107086 kernel: Booting paravirtualized kernel on Hyper-V
Dec 12 18:41:07.107093 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 18:41:07.107101 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 12 18:41:07.107109 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 12 18:41:07.107117 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 12 18:41:07.107125 kernel: pcpu-alloc: [0] 0 1
Dec 12 18:41:07.107132 kernel: Hyper-V: PV spinlocks enabled
Dec 12 18:41:07.107142 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 12 18:41:07.107151 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:41:07.107159 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 12 18:41:07.107167 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 18:41:07.107174 kernel: Fallback order for Node 0: 0
Dec 12 18:41:07.107182 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Dec 12 18:41:07.107189 kernel: Policy zone: Normal
Dec 12 18:41:07.107196 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 18:41:07.107204 kernel: software IO TLB: area num 2.
Dec 12 18:41:07.107213 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 12 18:41:07.107221 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 18:41:07.107229 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 18:41:07.107236 kernel: Dynamic Preempt: voluntary
Dec 12 18:41:07.107244 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 18:41:07.107256 kernel: rcu: RCU event tracing is enabled.
Dec 12 18:41:07.107272 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 12 18:41:07.107282 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 18:41:07.107303 kernel: Rude variant of Tasks RCU enabled.
Dec 12 18:41:07.107313 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 18:41:07.107321 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 18:41:07.107331 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 12 18:41:07.107340 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:41:07.107348 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:41:07.107357 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:41:07.107366 kernel: Using NULL legacy PIC
Dec 12 18:41:07.107376 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 12 18:41:07.107384 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 18:41:07.107392 kernel: Console: colour dummy device 80x25
Dec 12 18:41:07.107400 kernel: printk: legacy console [tty1] enabled
Dec 12 18:41:07.107408 kernel: printk: legacy console [ttyS0] enabled
Dec 12 18:41:07.107416 kernel: printk: legacy bootconsole [earlyser0] disabled
Dec 12 18:41:07.107425 kernel: ACPI: Core revision 20240827
Dec 12 18:41:07.107433 kernel: Failed to register legacy timer interrupt
Dec 12 18:41:07.107441 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 18:41:07.107451 kernel: x2apic enabled
Dec 12 18:41:07.107459 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 18:41:07.107468 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Dec 12 18:41:07.107476 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 12 18:41:07.107484 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Dec 12 18:41:07.107493 kernel: Hyper-V: Using IPI hypercalls
Dec 12 18:41:07.107501 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Dec 12 18:41:07.107510 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Dec 12 18:41:07.107518 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Dec 12 18:41:07.107529 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Dec 12 18:41:07.107537 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Dec 12 18:41:07.107546 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Dec 12 18:41:07.107555 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Dec 12 18:41:07.107563 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
Dec 12 18:41:07.107572 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 12 18:41:07.107580 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 12 18:41:07.107589 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 12 18:41:07.107597 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 18:41:07.107607 kernel: Spectre V2 : Mitigation: Retpolines
Dec 12 18:41:07.107615 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 18:41:07.107624 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 12 18:41:07.107632 kernel: RETBleed: Vulnerable
Dec 12 18:41:07.107641 kernel: Speculative Store Bypass: Vulnerable
Dec 12 18:41:07.107649 kernel: active return thunk: its_return_thunk
Dec 12 18:41:07.107657 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 12 18:41:07.107666 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 18:41:07.107674 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 18:41:07.107682 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 18:41:07.107692 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 12 18:41:07.107703 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 12 18:41:07.107711 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 12 18:41:07.107720 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Dec 12 18:41:07.107728 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Dec 12 18:41:07.107736 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Dec 12 18:41:07.107744 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 18:41:07.107752 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 12 18:41:07.107760 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 12 18:41:07.107768 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 12 18:41:07.107776 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Dec 12 18:41:07.107784 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Dec 12 18:41:07.107794 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Dec 12 18:41:07.107802 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Dec 12 18:41:07.107810 kernel: Freeing SMP alternatives memory: 32K
Dec 12 18:41:07.107818 kernel: pid_max: default: 32768 minimum: 301
Dec 12 18:41:07.107827 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 18:41:07.107835 kernel: landlock: Up and running.
Dec 12 18:41:07.107844 kernel: SELinux: Initializing.
Dec 12 18:41:07.107853 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 12 18:41:07.107862 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 12 18:41:07.107871 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Dec 12 18:41:07.107879 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Dec 12 18:41:07.107889 kernel: signal: max sigframe size: 11952
Dec 12 18:41:07.107899 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 18:41:07.107908 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 18:41:07.107917 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 18:41:07.107926 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 12 18:41:07.107934 kernel: smp: Bringing up secondary CPUs ...
Dec 12 18:41:07.107943 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 18:41:07.107952 kernel: .... node #0, CPUs: #1
Dec 12 18:41:07.107961 kernel: smp: Brought up 1 node, 2 CPUs
Dec 12 18:41:07.107970 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Dec 12 18:41:07.107981 kernel: Memory: 8068832K/8383228K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 308180K reserved, 0K cma-reserved)
Dec 12 18:41:07.107990 kernel: devtmpfs: initialized
Dec 12 18:41:07.107998 kernel: x86/mm: Memory block size: 128MB
Dec 12 18:41:07.108007 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 12 18:41:07.108015 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 18:41:07.108023 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 12 18:41:07.108031 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 18:41:07.108040 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 18:41:07.108049 kernel: audit: initializing netlink subsys (disabled)
Dec 12 18:41:07.108060 kernel: audit: type=2000 audit(1765564863.077:1): state=initialized audit_enabled=0 res=1
Dec 12 18:41:07.108069 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 18:41:07.108077 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 18:41:07.108086 kernel: cpuidle: using governor menu
Dec 12 18:41:07.108094 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 18:41:07.108102 kernel: dca service started, version 1.12.1
Dec 12 18:41:07.108111 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Dec 12 18:41:07.108120 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Dec 12 18:41:07.108131 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 18:41:07.108140 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 18:41:07.108149 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 18:41:07.108158 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 18:41:07.108166 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 18:41:07.108175 kernel: ACPI: Added _OSI(Module Device)
Dec 12 18:41:07.108184 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 18:41:07.108192 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 18:41:07.108201 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 18:41:07.108212 kernel: ACPI: Interpreter enabled
Dec 12 18:41:07.108221 kernel: ACPI: PM: (supports S0 S5)
Dec 12 18:41:07.108229 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 18:41:07.108238 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 18:41:07.108247 kernel: PCI: Ignoring E820 reservations for host bridge windows
Dec 12 18:41:07.108256 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 12 18:41:07.108265 kernel: iommu: Default domain type: Translated
Dec 12 18:41:07.108274 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 18:41:07.108282 kernel: efivars: Registered efivars operations
Dec 12 18:41:07.108344 kernel: PCI: Using ACPI for IRQ routing
Dec 12 18:41:07.108354 kernel: PCI: System does not support PCI
Dec 12 18:41:07.108363 kernel: vgaarb: loaded
Dec 12 18:41:07.108373 kernel: clocksource: Switched to clocksource tsc-early
Dec 12 18:41:07.108381 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 18:41:07.108391 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 18:41:07.108400 kernel: pnp: PnP ACPI init
Dec 12 18:41:07.108410 kernel: pnp: PnP ACPI: found 3 devices
Dec 12 18:41:07.108419 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 18:41:07.108428 kernel: NET: Registered PF_INET protocol family
Dec 12 18:41:07.108440 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 12 18:41:07.108450 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 12 18:41:07.108459 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 18:41:07.108468 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 18:41:07.108476 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 12 18:41:07.108485 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 12 18:41:07.108494 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 12 18:41:07.108502 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 12 18:41:07.108514 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 18:41:07.108522 kernel: NET: Registered PF_XDP protocol family
Dec 12 18:41:07.108531 kernel: PCI: CLS 0 bytes, default 64
Dec 12 18:41:07.108539 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 12 18:41:07.108548 kernel: software IO TLB: mapped [mem 0x000000003a9b9000-0x000000003e9b9000] (64MB)
Dec 12 18:41:07.108558 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Dec 12 18:41:07.108567 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Dec 12 18:41:07.108576 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Dec 12 18:41:07.108585 kernel: clocksource: Switched to clocksource tsc
Dec 12 18:41:07.108597 kernel: Initialise system trusted keyrings
Dec 12 18:41:07.108606 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 12 18:41:07.108615 kernel: Key type asymmetric registered
Dec 12 18:41:07.108624 kernel: Asymmetric key parser 'x509' registered
Dec 12 18:41:07.108633 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 18:41:07.108643 kernel: io scheduler mq-deadline registered
Dec 12 18:41:07.108652 kernel: io scheduler kyber registered
Dec 12 18:41:07.108659 kernel: io scheduler bfq registered
Dec 12 18:41:07.108667 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 18:41:07.108676 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 18:41:07.108683 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:41:07.108691 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 12 18:41:07.108698 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:41:07.108707 kernel: i8042: PNP: No PS/2 controller found.
Dec 12 18:41:07.108855 kernel: rtc_cmos 00:02: registered as rtc0
Dec 12 18:41:07.108935 kernel: rtc_cmos 00:02: setting system clock to 2025-12-12T18:41:06 UTC (1765564866)
Dec 12 18:41:07.109009 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 12 18:41:07.109023 kernel: intel_pstate: Intel P-state driver initializing
Dec 12 18:41:07.109032 kernel: efifb: probing for efifb
Dec 12 18:41:07.109042 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 12 18:41:07.109051 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 12 18:41:07.109060 kernel: efifb: scrolling: redraw
Dec 12 18:41:07.109068 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 12 18:41:07.109077 kernel: Console: switching to colour frame buffer device 128x48
Dec 12 18:41:07.109085 kernel: fb0: EFI VGA frame buffer device
Dec 12 18:41:07.109094 kernel: pstore: Using crash dump compression: deflate
Dec 12 18:41:07.109105 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 12 18:41:07.109114 kernel: NET: Registered PF_INET6 protocol family
Dec 12 18:41:07.109122 kernel: Segment Routing with IPv6
Dec 12 18:41:07.109131 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 18:41:07.109139 kernel: NET: Registered PF_PACKET protocol family
Dec 12 18:41:07.109148 kernel: Key type dns_resolver registered
Dec 12 18:41:07.109158 kernel: IPI shorthand broadcast: enabled
Dec 12 18:41:07.109167 kernel: sched_clock: Marking stable (3435005874, 120953472)->(3955053573, -399094227)
Dec 12 18:41:07.109177 kernel: registered taskstats version 1
Dec 12 18:41:07.109189 kernel: Loading compiled-in X.509 certificates
Dec 12 18:41:07.109199 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 12 18:41:07.109209 kernel: Demotion targets for Node 0: null
Dec 12 18:41:07.109218 kernel: Key type .fscrypt registered
Dec 12 18:41:07.109227 kernel: Key type fscrypt-provisioning registered
Dec 12 18:41:07.109236 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 18:41:07.109246 kernel: ima: Allocated hash algorithm: sha1
Dec 12 18:41:07.109255 kernel: ima: No architecture policies found
Dec 12 18:41:07.109264 kernel: clk: Disabling unused clocks
Dec 12 18:41:07.109276 kernel: Warning: unable to open an initial console.
Dec 12 18:41:07.109286 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 12 18:41:07.109315 kernel: Write protecting the kernel read-only data: 40960k
Dec 12 18:41:07.109325 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 12 18:41:07.110659 kernel: Run /init as init process
Dec 12 18:41:07.110677 kernel: with arguments:
Dec 12 18:41:07.110686 kernel: /init
Dec 12 18:41:07.110695 kernel: with environment:
Dec 12 18:41:07.110704 kernel: HOME=/
Dec 12 18:41:07.110717 kernel: TERM=linux
Dec 12 18:41:07.110728 systemd[1]: Successfully made /usr/ read-only.
Dec 12 18:41:07.110740 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:41:07.110750 systemd[1]: Detected virtualization microsoft.
Dec 12 18:41:07.110758 systemd[1]: Detected architecture x86-64.
Dec 12 18:41:07.110765 systemd[1]: Running in initrd.
Dec 12 18:41:07.110774 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:41:07.110786 systemd[1]: Hostname set to .
Dec 12 18:41:07.110796 systemd[1]: Initializing machine ID from random generator.
Dec 12 18:41:07.110806 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:41:07.110816 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:41:07.110826 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:41:07.110838 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:41:07.110848 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:41:07.110857 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:41:07.110871 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:41:07.110882 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 18:41:07.110892 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 18:41:07.110902 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:41:07.110911 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:41:07.110921 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:41:07.110930 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:41:07.110942 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:41:07.110951 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:41:07.110961 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:41:07.110971 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:41:07.110979 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 18:41:07.110989 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 18:41:07.110999 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:41:07.111008 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:41:07.111019 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:41:07.111031 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:41:07.111040 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 18:41:07.111050 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:41:07.111060 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 18:41:07.111070 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 18:41:07.111081 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 18:41:07.111090 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:41:07.111101 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:41:07.111123 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:41:07.111136 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 18:41:07.111147 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:41:07.111183 systemd-journald[186]: Collecting audit messages is disabled.
Dec 12 18:41:07.111208 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 18:41:07.111221 systemd-journald[186]: Journal started
Dec 12 18:41:07.111249 systemd-journald[186]: Runtime Journal (/run/log/journal/be9a7bcae3f24aadb49c961d448f9fd7) is 8M, max 158.6M, 150.6M free.
Dec 12 18:41:07.117313 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:41:07.117449 systemd-modules-load[188]: Inserted module 'overlay'
Dec 12 18:41:07.129313 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:41:07.132641 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:41:07.139464 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 18:41:07.150192 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:41:07.158055 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 18:41:07.158114 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:41:07.161592 kernel: Bridge firewalling registered
Dec 12 18:41:07.163234 systemd-modules-load[188]: Inserted module 'br_netfilter'
Dec 12 18:41:07.171418 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:41:07.176970 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:41:07.177467 systemd-tmpfiles[203]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 18:41:07.180593 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:41:07.187440 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 18:41:07.193423 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:41:07.199972 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:41:07.217021 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:41:07.221477 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:41:07.231161 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 12 18:41:07.232394 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:41:07.273608 systemd-resolved[236]: Positive Trust Anchors:
Dec 12 18:41:07.273626 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:41:07.273670 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:41:07.293585 systemd-resolved[236]: Defaulting to hostname 'linux'. Dec 12 18:41:07.296670 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:41:07.300943 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:41:07.325319 kernel: SCSI subsystem initialized Dec 12 18:41:07.333310 kernel: Loading iSCSI transport class v2.0-870. Dec 12 18:41:07.342313 kernel: iscsi: registered transport (tcp) Dec 12 18:41:07.360622 kernel: iscsi: registered transport (qla4xxx) Dec 12 18:41:07.360674 kernel: QLogic iSCSI HBA Driver Dec 12 18:41:07.375427 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:41:07.387699 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:41:07.393673 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:41:07.425604 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 18:41:07.429389 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 18:41:07.476315 kernel: raid6: avx512x4 gen() 35506 MB/s Dec 12 18:41:07.494306 kernel: raid6: avx512x2 gen() 35976 MB/s Dec 12 18:41:07.512306 kernel: raid6: avx512x1 gen() 23988 MB/s Dec 12 18:41:07.531305 kernel: raid6: avx2x4 gen() 33744 MB/s Dec 12 18:41:07.549308 kernel: raid6: avx2x2 gen() 33975 MB/s Dec 12 18:41:07.567817 kernel: raid6: avx2x1 gen() 24868 MB/s Dec 12 18:41:07.567834 kernel: raid6: using algorithm avx512x2 gen() 35976 MB/s Dec 12 18:41:07.587729 kernel: raid6: .... xor() 30854 MB/s, rmw enabled Dec 12 18:41:07.587747 kernel: raid6: using avx512x2 recovery algorithm Dec 12 18:41:07.605314 kernel: xor: automatically using best checksumming function avx Dec 12 18:41:07.739313 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 18:41:07.745055 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:41:07.749761 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:41:07.777165 systemd-udevd[434]: Using default interface naming scheme 'v255'. Dec 12 18:41:07.781560 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:41:07.788038 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 18:41:07.810030 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation Dec 12 18:41:07.829216 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:41:07.832490 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:41:07.866963 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:41:07.875819 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 18:41:07.917308 kernel: cryptd: max_cpu_qlen set to 1000 Dec 12 18:41:07.933313 kernel: AES CTR mode by8 optimization enabled Dec 12 18:41:07.957281 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 12 18:41:07.959538 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:41:07.967005 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:41:07.972756 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:41:07.981443 kernel: hv_vmbus: Vmbus version:5.3 Dec 12 18:41:07.988244 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:41:07.990436 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:41:08.001491 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:41:08.007815 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 12 18:41:08.010511 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 12 18:41:08.010538 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 12 18:41:08.014318 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 12 18:41:08.028334 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 12 18:41:08.033321 kernel: PTP clock support registered Dec 12 18:41:08.035324 kernel: hv_vmbus: registering driver hv_pci Dec 12 18:41:08.038862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 12 18:41:08.045659 kernel: hv_utils: Registering HyperV Utility Driver Dec 12 18:41:08.045813 kernel: hv_vmbus: registering driver hid_hyperv Dec 12 18:41:08.045826 kernel: hv_vmbus: registering driver hv_utils Dec 12 18:41:08.048674 kernel: hv_vmbus: registering driver hv_netvsc Dec 12 18:41:08.053309 kernel: hv_vmbus: registering driver hv_storvsc Dec 12 18:41:08.068491 kernel: scsi host0: storvsc_host_t Dec 12 18:41:08.069251 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 12 18:41:08.069273 kernel: hv_utils: Shutdown IC version 3.2 Dec 12 18:41:08.069285 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Dec 12 18:41:08.069613 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ad4ea0b (unnamed net_device) (uninitialized): VF slot 1 added Dec 12 18:41:08.075888 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 12 18:41:08.076067 kernel: hv_utils: Heartbeat IC version 3.0 Dec 12 18:41:08.076084 kernel: hv_utils: TimeSync IC version 4.0 Dec 12 18:41:08.544096 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 12 18:41:08.544156 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Dec 12 18:41:08.544019 systemd-resolved[236]: Clock change detected. Flushing caches. 
Dec 12 18:41:08.559061 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Dec 12 18:41:08.559396 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Dec 12 18:41:08.563009 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Dec 12 18:41:08.567838 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Dec 12 18:41:08.577968 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 12 18:41:08.578126 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 12 18:41:08.579844 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 12 18:41:08.586512 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Dec 12 18:41:08.586778 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Dec 12 18:41:08.596959 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#87 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 12 18:41:08.611648 kernel: nvme nvme0: pci function c05b:00:00.0 Dec 12 18:41:08.611860 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Dec 12 18:41:08.621986 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#108 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 12 18:41:08.769848 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 12 18:41:08.778845 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 12 18:41:09.080851 kernel: nvme nvme0: using unchecked data buffer Dec 12 18:41:09.264729 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Dec 12 18:41:09.314593 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Dec 12 18:41:09.326264 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Dec 12 18:41:09.331026 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. 
Dec 12 18:41:09.334127 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 18:41:09.348777 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Dec 12 18:41:09.349472 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:41:09.349603 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:41:09.349626 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:41:09.350422 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 18:41:09.363929 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 18:41:09.383011 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:41:09.392882 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 12 18:41:09.400845 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 12 18:41:09.559051 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Dec 12 18:41:09.565953 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Dec 12 18:41:09.566126 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Dec 12 18:41:09.568664 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Dec 12 18:41:09.588769 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Dec 12 18:41:09.588816 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Dec 12 18:41:09.588846 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Dec 12 18:41:09.588860 kernel: pci 7870:00:00.0: enabling Extended Tags Dec 12 18:41:09.606332 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Dec 12 18:41:09.606523 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Dec 12 18:41:09.614215 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Dec 12 18:41:09.618642 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Dec 12 18:41:09.630364 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Dec 12 18:41:09.630596 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ad4ea0b eth0: VF registering: eth1 Dec 12 18:41:09.632355 kernel: mana 7870:00:00.0 eth1: joined to eth0 Dec 12 18:41:09.635953 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Dec 12 18:41:10.408087 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 12 18:41:10.408159 disk-uuid[658]: The operation has completed successfully. Dec 12 18:41:10.483314 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 18:41:10.483413 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 18:41:10.517122 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 12 18:41:10.533346 sh[696]: Success Dec 12 18:41:10.563080 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 18:41:10.563150 kernel: device-mapper: uevent: version 1.0.3 Dec 12 18:41:10.564779 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 18:41:10.574852 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 12 18:41:10.784835 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 12 18:41:10.792106 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 12 18:41:10.806801 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 18:41:10.821862 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (709) Dec 12 18:41:10.823862 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 12 18:41:10.823897 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:41:11.170263 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 12 18:41:11.170370 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 18:41:11.171571 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 18:41:11.262323 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 12 18:41:11.265319 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:41:11.269956 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 18:41:11.270856 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 18:41:11.285144 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 12 18:41:11.310096 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (732) Dec 12 18:41:11.312850 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:41:11.314877 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:41:11.337442 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 12 18:41:11.337486 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 12 18:41:11.339362 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 12 18:41:11.346880 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:41:11.347807 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 18:41:11.353063 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 18:41:11.385165 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:41:11.388984 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:41:11.427219 systemd-networkd[878]: lo: Link UP Dec 12 18:41:11.427229 systemd-networkd[878]: lo: Gained carrier Dec 12 18:41:11.429865 systemd-networkd[878]: Enumeration completed Dec 12 18:41:11.434471 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Dec 12 18:41:11.436157 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 12 18:41:11.430307 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:41:11.438524 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ad4ea0b eth0: Data path switched to VF: enP30832s1 Dec 12 18:41:11.430311 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 12 18:41:11.430572 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:41:11.438121 systemd[1]: Reached target network.target - Network. Dec 12 18:41:11.440997 systemd-networkd[878]: enP30832s1: Link UP Dec 12 18:41:11.441072 systemd-networkd[878]: eth0: Link UP Dec 12 18:41:11.444054 systemd-networkd[878]: eth0: Gained carrier Dec 12 18:41:11.444069 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:41:11.447010 systemd-networkd[878]: enP30832s1: Gained carrier Dec 12 18:41:11.461860 systemd-networkd[878]: eth0: DHCPv4 address 10.200.4.12/24, gateway 10.200.4.1 acquired from 168.63.129.16 Dec 12 18:41:12.591643 ignition[827]: Ignition 2.22.0 Dec 12 18:41:12.591657 ignition[827]: Stage: fetch-offline Dec 12 18:41:12.594372 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:41:12.591793 ignition[827]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:41:12.599000 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 12 18:41:12.591801 ignition[827]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 18:41:12.591941 ignition[827]: parsed url from cmdline: "" Dec 12 18:41:12.591944 ignition[827]: no config URL provided Dec 12 18:41:12.591950 ignition[827]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:41:12.591956 ignition[827]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:41:12.591962 ignition[827]: failed to fetch config: resource requires networking Dec 12 18:41:12.592452 ignition[827]: Ignition finished successfully Dec 12 18:41:12.635484 ignition[888]: Ignition 2.22.0 Dec 12 18:41:12.635497 ignition[888]: Stage: fetch Dec 12 18:41:12.635731 ignition[888]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:41:12.635740 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 18:41:12.635868 ignition[888]: parsed url from cmdline: "" Dec 12 18:41:12.635872 ignition[888]: no config URL provided Dec 12 18:41:12.635878 ignition[888]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:41:12.635885 ignition[888]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:41:12.635910 ignition[888]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 12 18:41:12.717452 ignition[888]: GET result: OK Dec 12 18:41:12.717529 ignition[888]: config has been read from IMDS userdata Dec 12 18:41:12.717558 ignition[888]: parsing config with SHA512: e767f786d2909d43178b04fef94469bade713ca89dba2fc154a28ff9b4f7a8709cb8b27e41e5f8f1806809b15d3718bd3e65c2ea27f39563c4c959df726d693f Dec 12 18:41:12.724431 unknown[888]: fetched base config from "system" Dec 12 18:41:12.725765 unknown[888]: fetched base config from "system" Dec 12 18:41:12.726231 ignition[888]: fetch: fetch complete Dec 12 18:41:12.725834 unknown[888]: fetched user config from "azure" Dec 12 18:41:12.726235 ignition[888]: fetch: fetch passed
Dec 12 18:41:12.728592 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 18:41:12.726287 ignition[888]: Ignition finished successfully Dec 12 18:41:12.732945 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 18:41:12.768094 ignition[895]: Ignition 2.22.0 Dec 12 18:41:12.768107 ignition[895]: Stage: kargs Dec 12 18:41:12.771067 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 18:41:12.768357 ignition[895]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:41:12.779874 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 18:41:12.768366 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 18:41:12.769498 ignition[895]: kargs: kargs passed Dec 12 18:41:12.769534 ignition[895]: Ignition finished successfully Dec 12 18:41:12.807479 ignition[902]: Ignition 2.22.0 Dec 12 18:41:12.807492 ignition[902]: Stage: disks Dec 12 18:41:12.809959 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 18:41:12.807721 ignition[902]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:41:12.812252 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 18:41:12.807730 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 18:41:12.815884 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 18:41:12.808713 ignition[902]: disks: disks passed Dec 12 18:41:12.816008 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:41:12.808754 ignition[902]: Ignition finished successfully Dec 12 18:41:12.816034 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:41:12.816057 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:41:12.817045 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 18:41:12.888966 systemd-fsck[911]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Dec 12 18:41:12.893433 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 18:41:12.897933 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 18:41:13.199060 systemd-networkd[878]: eth0: Gained IPv6LL Dec 12 18:41:13.470866 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 12 18:41:13.470986 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 18:41:13.476371 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 18:41:13.493550 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:41:13.497868 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 18:41:13.502667 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 12 18:41:13.503922 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 18:41:13.503957 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:41:13.521762 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 18:41:13.525887 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (920) Dec 12 18:41:13.526264 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 12 18:41:13.530167 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:41:13.531020 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:41:13.541353 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 12 18:41:13.541485 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 12 18:41:13.543046 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 12 18:41:13.544291 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 18:41:13.971871 coreos-metadata[922]: Dec 12 18:41:13.971 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 12 18:41:13.976397 coreos-metadata[922]: Dec 12 18:41:13.976 INFO Fetch successful Dec 12 18:41:13.976397 coreos-metadata[922]: Dec 12 18:41:13.976 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 12 18:41:13.988418 coreos-metadata[922]: Dec 12 18:41:13.988 INFO Fetch successful Dec 12 18:41:14.008846 coreos-metadata[922]: Dec 12 18:41:14.008 INFO wrote hostname ci-4459.2.2-a-7f4437f45c to /sysroot/etc/hostname Dec 12 18:41:14.012371 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 12 18:41:14.405952 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 18:41:14.438698 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory Dec 12 18:41:14.456126 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 18:41:14.461378 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 18:41:15.881482 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 18:41:15.885370 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 18:41:15.896973 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Dec 12 18:41:15.906183 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 18:41:15.909225 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:41:15.937386 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 18:41:15.948001 ignition[1040]: INFO : Ignition 2.22.0 Dec 12 18:41:15.948001 ignition[1040]: INFO : Stage: mount Dec 12 18:41:15.955971 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:41:15.955971 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 18:41:15.955971 ignition[1040]: INFO : mount: mount passed Dec 12 18:41:15.955971 ignition[1040]: INFO : Ignition finished successfully Dec 12 18:41:15.950409 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 18:41:15.953725 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 18:41:15.968456 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:41:15.987861 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1051) Dec 12 18:41:15.990614 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:41:15.990735 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:41:15.997634 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 12 18:41:15.997692 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 12 18:41:15.999317 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 12 18:41:16.001617 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 18:41:16.034517 ignition[1068]: INFO : Ignition 2.22.0 Dec 12 18:41:16.034517 ignition[1068]: INFO : Stage: files Dec 12 18:41:16.040880 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:41:16.040880 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 18:41:16.040880 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Dec 12 18:41:16.040880 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 18:41:16.040880 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 18:41:16.106392 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 18:41:16.108492 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 18:41:16.108492 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 18:41:16.108462 unknown[1068]: wrote ssh authorized keys file for user: core Dec 12 18:41:16.126708 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 12 18:41:16.130921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 12 18:41:16.395550 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 18:41:16.443878 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 12 18:41:16.447924 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 12 18:41:16.447924 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 12 18:41:16.660661 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 12 18:41:16.962216 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 12 18:41:16.962216 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 12 18:41:16.968925 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 18:41:16.968925 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:41:16.968925 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:41:16.968925 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:41:16.968925 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:41:16.968925 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:41:16.968925 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:41:16.986982 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:41:16.986982 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:41:16.986982 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:41:16.986982 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:41:16.986982 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:41:16.986982 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 12 18:41:17.377878 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 12 18:41:18.454327 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:41:18.454327 ignition[1068]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 12 18:41:18.495617 ignition[1068]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:41:18.504169 ignition[1068]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:41:18.504169 ignition[1068]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 12 18:41:18.504169 ignition[1068]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 12 18:41:18.518061 ignition[1068]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 18:41:18.518061 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:41:18.518061 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:41:18.518061 ignition[1068]: INFO : files: files passed Dec 12 18:41:18.518061 ignition[1068]: INFO : Ignition finished successfully Dec 12 18:41:18.508867 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 18:41:18.512605 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 18:41:18.534542 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 18:41:18.539660 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 18:41:18.542538 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 18:41:18.552997 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:41:18.555758 initrd-setup-root-after-ignition[1098]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:41:18.558960 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:41:18.558957 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:41:18.561271 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 18:41:18.568688 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 18:41:18.633410 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 18:41:18.633514 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 18:41:18.638359 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 18:41:18.641040 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 18:41:18.644938 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 18:41:18.646583 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 18:41:18.674130 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:41:18.676363 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 18:41:18.697413 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:41:18.702270 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:41:18.702879 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 18:41:18.703123 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 18:41:18.703251 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:41:18.711948 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 18:41:18.716002 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 18:41:18.720021 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 18:41:18.724004 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:41:18.728009 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 18:41:18.731991 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:41:18.736009 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 18:41:18.740004 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:41:18.742637 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 18:41:18.747016 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 18:41:18.750992 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 18:41:18.754971 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 18:41:18.755126 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:41:18.759193 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:41:18.763005 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:41:18.765686 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 18:41:18.766087 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:41:18.769982 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 18:41:18.770147 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:41:18.776957 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 18:41:18.777132 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:41:18.781527 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 18:41:18.781676 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 18:41:18.784324 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 12 18:41:18.784449 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 12 18:41:18.789401 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 18:41:18.798409 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 18:41:18.806962 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 18:41:18.825871 ignition[1122]: INFO : Ignition 2.22.0
Dec 12 18:41:18.825871 ignition[1122]: INFO : Stage: umount
Dec 12 18:41:18.825871 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:41:18.825871 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 12 18:41:18.825871 ignition[1122]: INFO : umount: umount passed
Dec 12 18:41:18.825871 ignition[1122]: INFO : Ignition finished successfully
Dec 12 18:41:18.809577 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:41:18.816084 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 18:41:18.816202 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:41:18.829514 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 18:41:18.829622 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 18:41:18.834230 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 18:41:18.834325 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 18:41:18.840709 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 18:41:18.840774 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 18:41:18.846738 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 18:41:18.846793 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 18:41:18.849929 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 12 18:41:18.849974 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 12 18:41:18.853907 systemd[1]: Stopped target network.target - Network.
Dec 12 18:41:18.857913 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 18:41:18.857975 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:41:18.863583 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 18:41:18.866916 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 18:41:18.871880 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:41:18.874184 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 18:41:18.875790 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 18:41:18.880275 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 18:41:18.880988 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:41:18.884773 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 18:41:18.885172 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:41:18.887968 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 18:41:18.888505 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 18:41:18.892792 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 18:41:18.894079 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 18:41:18.896983 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 18:41:18.901469 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 18:41:18.904662 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 18:41:18.907406 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 18:41:18.907504 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 18:41:18.915407 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 12 18:41:18.915695 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 18:41:18.915815 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 18:41:18.919251 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 12 18:41:18.920086 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 18:41:18.923920 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 18:41:18.923965 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:41:18.927946 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 18:41:18.931205 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 18:41:18.985884 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ad4ea0b eth0: Data path switched from VF: enP30832s1
Dec 12 18:41:18.932545 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:41:18.939953 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 18:41:18.940014 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:41:19.002139 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 12 18:41:18.943979 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 18:41:18.944027 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:41:18.946933 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 18:41:18.946981 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:41:18.953326 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:41:18.961369 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 18:41:18.961442 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:41:18.968822 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 18:41:18.978013 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:41:18.984583 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 18:41:18.985236 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:41:18.990284 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 18:41:18.990533 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:41:18.996918 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 18:41:18.996976 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:41:19.001270 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 18:41:19.001336 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:41:19.002699 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 18:41:19.002753 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:41:19.012971 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 18:41:19.016907 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 12 18:41:19.016973 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:41:19.022458 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 12 18:41:19.022512 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:41:19.026074 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 12 18:41:19.026983 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:41:19.037170 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 12 18:41:19.037230 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:41:19.043914 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:41:19.043967 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:41:19.049184 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 12 18:41:19.049245 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Dec 12 18:41:19.049281 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 12 18:41:19.049323 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:41:19.049680 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 12 18:41:19.049773 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 12 18:41:19.054141 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 12 18:41:19.054249 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 12 18:41:19.088406 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 18:41:19.088540 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 18:41:19.093148 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 12 18:41:19.097195 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 18:41:19.097267 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 18:41:19.103444 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 12 18:41:19.146108 systemd[1]: Switching root.
Dec 12 18:41:19.210754 systemd-journald[186]: Journal stopped
Dec 12 18:41:27.518895 systemd-journald[186]: Received SIGTERM from PID 1 (systemd).
Dec 12 18:41:27.518934 kernel: SELinux: policy capability network_peer_controls=1
Dec 12 18:41:27.518950 kernel: SELinux: policy capability open_perms=1
Dec 12 18:41:27.518958 kernel: SELinux: policy capability extended_socket_class=1
Dec 12 18:41:27.518965 kernel: SELinux: policy capability always_check_network=0
Dec 12 18:41:27.518972 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 12 18:41:27.518980 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 12 18:41:27.518988 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 12 18:41:27.518997 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 12 18:41:27.519004 kernel: SELinux: policy capability userspace_initial_context=0
Dec 12 18:41:27.519012 kernel: audit: type=1403 audit(1765564883.312:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 12 18:41:27.519021 systemd[1]: Successfully loaded SELinux policy in 547.862ms.
Dec 12 18:41:27.519030 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.959ms.
Dec 12 18:41:27.519040 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:41:27.519051 systemd[1]: Detected virtualization microsoft.
Dec 12 18:41:27.519059 systemd[1]: Detected architecture x86-64.
Dec 12 18:41:27.519067 systemd[1]: Detected first boot.
Dec 12 18:41:27.519075 systemd[1]: Hostname set to .
Dec 12 18:41:27.519083 systemd[1]: Initializing machine ID from random generator.
Dec 12 18:41:27.519091 zram_generator::config[1166]: No configuration found.
Dec 12 18:41:27.519102 kernel: Guest personality initialized and is inactive
Dec 12 18:41:27.519110 kernel: VMCI host device registered (name=vmci, major=10, minor=259)
Dec 12 18:41:27.519117 kernel: Initialized host personality
Dec 12 18:41:27.519124 kernel: NET: Registered PF_VSOCK protocol family
Dec 12 18:41:27.519132 systemd[1]: Populated /etc with preset unit settings.
Dec 12 18:41:27.519142 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 12 18:41:27.519150 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 12 18:41:27.519158 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 12 18:41:27.519167 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:41:27.519175 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 12 18:41:27.519184 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 12 18:41:27.519192 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 12 18:41:27.519199 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 12 18:41:27.519208 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 12 18:41:27.519216 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 12 18:41:27.519226 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 12 18:41:27.519234 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 12 18:41:27.519242 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:41:27.519257 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:41:27.519271 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 12 18:41:27.519288 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 12 18:41:27.519297 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 12 18:41:27.519306 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:41:27.519317 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 12 18:41:27.519325 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:41:27.519333 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:41:27.519342 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 18:41:27.519350 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 18:41:27.519358 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:41:27.519367 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 18:41:27.519377 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:41:27.519388 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:41:27.519404 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:41:27.519422 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:41:27.519433 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 12 18:41:27.519442 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 12 18:41:27.519452 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 12 18:41:27.519461 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:41:27.519469 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:41:27.519477 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:41:27.519486 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 12 18:41:27.519494 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 12 18:41:27.519503 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 12 18:41:27.519520 systemd[1]: Mounting media.mount - External Media Directory...
Dec 12 18:41:27.519538 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:41:27.519552 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 12 18:41:27.519567 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 12 18:41:27.519576 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 12 18:41:27.519585 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 12 18:41:27.519593 systemd[1]: Reached target machines.target - Containers.
Dec 12 18:41:27.519602 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 12 18:41:27.519610 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:41:27.519620 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:41:27.519628 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 12 18:41:27.519637 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:41:27.519645 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:41:27.519653 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:41:27.519661 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 12 18:41:27.519669 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:41:27.519683 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 12 18:41:27.519702 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 12 18:41:27.519714 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 12 18:41:27.519722 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 12 18:41:27.519731 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 12 18:41:27.519739 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:41:27.519748 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:41:27.519756 kernel: loop: module loaded
Dec 12 18:41:27.519769 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:41:27.519786 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:41:27.519798 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 12 18:41:27.519807 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 12 18:41:27.519815 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:41:27.519897 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 12 18:41:27.519907 systemd[1]: Stopped verity-setup.service.
Dec 12 18:41:27.519921 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:41:27.519936 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:41:27.519951 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:41:27.519965 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:41:27.519973 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:41:27.519981 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:41:27.519989 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:41:27.519997 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:41:27.520006 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:41:27.520014 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:41:27.520022 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:41:27.520033 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:41:27.520042 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 12 18:41:27.520050 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:41:27.520061 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 12 18:41:27.520069 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 12 18:41:27.520077 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 12 18:41:27.520085 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 12 18:41:27.520121 systemd-journald[1252]: Collecting audit messages is disabled.
Dec 12 18:41:27.520145 systemd[1]: Mounted media.mount - External Media Directory.
Dec 12 18:41:27.520153 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 12 18:41:27.520162 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 12 18:41:27.520170 kernel: fuse: init (API version 7.41)
Dec 12 18:41:27.520186 systemd-journald[1252]: Journal started
Dec 12 18:41:27.520227 systemd-journald[1252]: Runtime Journal (/run/log/journal/2e48497efa0b47749225dfbfbd71bf93) is 8M, max 158.6M, 150.6M free.
Dec 12 18:41:26.119476 systemd[1]: Queued start job for default target multi-user.target.
Dec 12 18:41:26.131416 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 12 18:41:26.131780 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 12 18:41:27.502167 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Dec 12 18:41:27.502182 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Dec 12 18:41:27.526890 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:41:27.529348 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 12 18:41:27.533216 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 12 18:41:27.533806 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 12 18:41:27.538869 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:41:27.541614 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 12 18:41:27.546881 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:41:27.563712 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 12 18:41:27.569944 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 18:41:27.573598 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 18:41:27.577010 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 12 18:41:27.577054 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:41:27.580904 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 12 18:41:27.585402 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 18:41:27.587136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:41:27.590042 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 12 18:41:27.596980 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 12 18:41:27.599220 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:41:27.604993 kernel: ACPI: bus type drm_connector registered
Dec 12 18:41:27.604969 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 12 18:41:27.609977 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:41:27.617970 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 12 18:41:27.622096 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 12 18:41:27.628354 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:41:27.628547 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:41:27.630812 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 18:41:27.633667 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 18:41:27.643775 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 12 18:41:27.647059 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 12 18:41:27.652996 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 12 18:41:27.674860 kernel: loop0: detected capacity change from 0 to 27936
Dec 12 18:41:27.693263 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 12 18:41:27.693871 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 12 18:41:27.705316 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:41:27.726794 systemd-journald[1252]: Time spent on flushing to /var/log/journal/2e48497efa0b47749225dfbfbd71bf93 is 19.060ms for 1009 entries.
Dec 12 18:41:27.726794 systemd-journald[1252]: System Journal (/var/log/journal/2e48497efa0b47749225dfbfbd71bf93) is 8M, max 2.6G, 2.6G free.
Dec 12 18:41:27.767220 systemd-journald[1252]: Received client request to flush runtime journal.
Dec 12 18:41:27.769288 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 12 18:41:27.789230 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 12 18:41:27.793634 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:41:27.820002 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
Dec 12 18:41:27.820021 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
Dec 12 18:41:27.823586 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:41:28.053852 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 18:41:28.101853 kernel: loop1: detected capacity change from 0 to 224512
Dec 12 18:41:28.158872 kernel: loop2: detected capacity change from 0 to 128560
Dec 12 18:41:28.919151 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 12 18:41:28.922455 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:41:28.952855 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Dec 12 18:41:29.663822 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:41:29.671904 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:41:29.683858 kernel: loop3: detected capacity change from 0 to 110984
Dec 12 18:41:29.741020 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 12 18:41:29.744557 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 12 18:41:29.872384 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 12 18:41:29.882883 kernel: hv_vmbus: registering driver hyperv_fb
Dec 12 18:41:29.884852 kernel: hv_vmbus: registering driver hv_balloon
Dec 12 18:41:29.891851 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 12 18:41:29.919852 kernel: mousedev: PS/2 mouse device common for all mice
Dec 12 18:41:29.941888 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#84 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 12 18:41:29.947411 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 12 18:41:29.947430 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 12 18:41:29.951181 kernel: Console: switching to colour dummy device 80x25
Dec 12 18:41:29.958868 kernel: Console: switching to colour frame buffer device 128x48
Dec 12 18:41:29.996852 kernel: loop4: detected capacity change from 0 to 27936
Dec 12 18:41:31.036081 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Dec 12 18:41:31.047009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:41:31.448364 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:41:31.448761 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:41:31.453345 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:41:31.455057 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:41:31.468956 systemd-networkd[1344]: lo: Link UP
Dec 12 18:41:31.468960 systemd-networkd[1344]: lo: Gained carrier
Dec 12 18:41:31.470452 systemd-networkd[1344]: Enumeration completed
Dec 12 18:41:31.472932 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:41:31.473893 systemd-networkd[1344]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:41:31.473901 systemd-networkd[1344]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:41:31.475860 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Dec 12 18:41:31.476136 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 12 18:41:31.483185 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 12 18:41:31.482517 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 12 18:41:31.487656 systemd-networkd[1344]: enP30832s1: Link UP
Dec 12 18:41:31.487750 systemd-networkd[1344]: eth0: Link UP
Dec 12 18:41:31.487754 systemd-networkd[1344]: eth0: Gained carrier
Dec 12 18:41:31.487773 systemd-networkd[1344]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:41:31.487872 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ad4ea0b eth0: Data path switched to VF: enP30832s1
Dec 12 18:41:31.497015 systemd-networkd[1344]: enP30832s1: Gained carrier
Dec 12 18:41:31.505859 systemd-networkd[1344]: eth0: DHCPv4 address 10.200.4.12/24, gateway 10.200.4.1 acquired from 168.63.129.16
Dec 12 18:41:31.610740 kernel: loop5: detected capacity change from 0 to 224512
Dec 12 18:41:32.220970 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 12 18:41:32.244847 kernel: loop6: detected capacity change from 0 to 128560
Dec 12 18:41:32.245256 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Dec 12 18:41:32.247945 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 12 18:41:32.627571 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 12 18:41:32.632147 kernel: loop7: detected capacity change from 0 to 110984
Dec 12 18:41:32.652864 (sd-merge)[1401]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Dec 12 18:41:32.653321 (sd-merge)[1401]: Merged extensions into '/usr'.
Dec 12 18:41:32.658179 systemd[1]: Reload requested from client PID 1310 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 12 18:41:32.658196 systemd[1]: Reloading...
Dec 12 18:41:32.725948 zram_generator::config[1458]: No configuration found.
Dec 12 18:41:32.956072 systemd[1]: Reloading finished in 297 ms.
Dec 12 18:41:32.984937 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 18:41:32.989208 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:41:32.999661 systemd[1]: Starting ensure-sysext.service...
Dec 12 18:41:33.003039 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:41:33.024415 systemd[1]: Reload requested from client PID 1522 ('systemctl') (unit ensure-sysext.service)...
Dec 12 18:41:33.024437 systemd[1]: Reloading...
Dec 12 18:41:33.089847 zram_generator::config[1557]: No configuration found.
Dec 12 18:41:33.230937 systemd-networkd[1344]: eth0: Gained IPv6LL
Dec 12 18:41:33.410654 systemd-tmpfiles[1523]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 18:41:33.410690 systemd-tmpfiles[1523]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 18:41:33.411049 systemd-tmpfiles[1523]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 12 18:41:33.411356 systemd-tmpfiles[1523]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 12 18:41:33.412100 systemd-tmpfiles[1523]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 12 18:41:33.412360 systemd-tmpfiles[1523]: ACLs are not supported, ignoring.
Dec 12 18:41:33.412439 systemd-tmpfiles[1523]: ACLs are not supported, ignoring.
Dec 12 18:41:33.557387 systemd[1]: Reloading finished in 532 ms.
Dec 12 18:41:33.570060 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 12 18:41:33.586985 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:41:33.587143 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:41:33.588208 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:41:33.593146 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:41:33.599073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:41:33.601058 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:41:33.601204 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:41:33.601328 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:41:33.604700 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:41:33.610149 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:41:33.618439 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:41:33.618597 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:41:33.622205 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:41:33.622347 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:41:33.626400 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:41:33.626629 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:41:33.626773 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:41:33.627052 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:41:33.627179 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:41:33.627297 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:41:33.627406 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:41:33.631282 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:41:33.631508 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:41:33.632532 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:41:33.650792 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:41:33.661245 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:41:33.667400 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:41:33.672010 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:41:33.672145 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:41:33.672389 systemd[1]: Reached target time-set.target - System Time Set.
Dec 12 18:41:33.673672 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:41:33.674875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:41:33.676071 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:41:33.678584 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:41:33.678720 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:41:33.682237 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:41:33.682375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:41:33.684079 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:41:33.684235 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:41:33.689974 systemd[1]: Finished ensure-sysext.service.
Dec 12 18:41:33.692703 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:41:33.692745 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:41:33.895941 systemd-tmpfiles[1523]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:41:33.895953 systemd-tmpfiles[1523]: Skipping /boot
Dec 12 18:41:33.964930 systemd-tmpfiles[1523]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:41:33.964945 systemd-tmpfiles[1523]: Skipping /boot
Dec 12 18:41:34.559953 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:41:34.564627 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:41:34.569999 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 12 18:41:34.575168 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 12 18:41:34.585593 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:41:34.588757 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 12 18:41:34.965391 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 12 18:41:34.990124 systemd-resolved[1633]: Positive Trust Anchors:
Dec 12 18:41:34.990139 systemd-resolved[1633]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:41:34.990176 systemd-resolved[1633]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:41:35.008808 systemd-resolved[1633]: Using system hostname 'ci-4459.2.2-a-7f4437f45c'.
Dec 12 18:41:35.010665 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:41:35.014076 systemd[1]: Reached target network.target - Network.
Dec 12 18:41:35.016965 systemd[1]: Reached target network-online.target - Network is Online.
Dec 12 18:41:35.020955 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:41:35.042494 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 12 18:41:35.104076 augenrules[1657]: No rules
Dec 12 18:41:35.105426 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:41:35.105645 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:41:36.676764 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 12 18:41:36.680136 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 18:41:39.395012 ldconfig[1305]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 18:41:39.405931 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 12 18:41:39.411589 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 12 18:41:39.432861 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 12 18:41:39.434433 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:41:39.438029 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 12 18:41:39.439581 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 12 18:41:39.442913 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 12 18:41:39.446089 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 12 18:41:39.448985 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 12 18:41:39.451902 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 12 18:41:39.454913 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 12 18:41:39.454959 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:41:39.456924 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:41:39.461229 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 12 18:41:39.465984 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 12 18:41:39.470750 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 12 18:41:39.474090 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 12 18:41:39.476934 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 12 18:41:39.492358 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 12 18:41:39.495555 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 12 18:41:39.498500 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 12 18:41:39.504088 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:41:39.506897 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:41:39.509941 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:41:39.509969 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:41:39.512161 systemd[1]: Starting chronyd.service - NTP client/server...
Dec 12 18:41:39.514399 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 12 18:41:39.529083 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 12 18:41:39.534033 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 12 18:41:39.538018 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 12 18:41:39.544537 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 12 18:41:39.548978 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 12 18:41:39.551954 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 12 18:41:39.558970 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 12 18:41:39.560939 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Dec 12 18:41:39.563125 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Dec 12 18:41:39.565204 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Dec 12 18:41:39.567043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:41:39.574070 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 12 18:41:39.576797 jq[1674]: false
Dec 12 18:41:39.577455 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 12 18:41:39.583003 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 12 18:41:39.592412 extend-filesystems[1675]: Found /dev/nvme0n1p6
Dec 12 18:41:39.594121 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 12 18:41:39.599055 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 12 18:41:39.607504 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 12 18:41:39.609108 google_oslogin_nss_cache[1676]: oslogin_cache_refresh[1676]: Refreshing passwd entry cache
Dec 12 18:41:39.609381 oslogin_cache_refresh[1676]: Refreshing passwd entry cache
Dec 12 18:41:39.612781 KVP[1677]: KVP starting; pid is:1677
Dec 12 18:41:39.613879 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 12 18:41:39.621553 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 12 18:41:39.627580 kernel: hv_utils: KVP IC version 4.0
Dec 12 18:41:39.625314 systemd[1]: Starting update-engine.service - Update Engine...
Dec 12 18:41:39.623380 KVP[1677]: KVP LIC Version: 3.1
Dec 12 18:41:39.630856 extend-filesystems[1675]: Found /dev/nvme0n1p9
Dec 12 18:41:39.629550 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 12 18:41:39.636614 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 12 18:41:39.640219 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 12 18:41:39.640451 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 12 18:41:39.641105 extend-filesystems[1675]: Checking size of /dev/nvme0n1p9
Dec 12 18:41:39.647635 google_oslogin_nss_cache[1676]: oslogin_cache_refresh[1676]: Failure getting users, quitting
Dec 12 18:41:39.647635 google_oslogin_nss_cache[1676]: oslogin_cache_refresh[1676]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:41:39.647635 google_oslogin_nss_cache[1676]: oslogin_cache_refresh[1676]: Refreshing group entry cache
Dec 12 18:41:39.646060 oslogin_cache_refresh[1676]: Failure getting users, quitting
Dec 12 18:41:39.646080 oslogin_cache_refresh[1676]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:41:39.646134 oslogin_cache_refresh[1676]: Refreshing group entry cache
Dec 12 18:41:39.650053 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 12 18:41:39.652034 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 12 18:41:39.667423 google_oslogin_nss_cache[1676]: oslogin_cache_refresh[1676]: Failure getting groups, quitting
Dec 12 18:41:39.667423 google_oslogin_nss_cache[1676]: oslogin_cache_refresh[1676]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:41:39.667399 oslogin_cache_refresh[1676]: Failure getting groups, quitting
Dec 12 18:41:39.667411 oslogin_cache_refresh[1676]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:41:39.671253 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 12 18:41:39.671926 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 12 18:41:39.692475 jq[1694]: true
Dec 12 18:41:39.704818 extend-filesystems[1675]: Old size kept for /dev/nvme0n1p9
Dec 12 18:41:39.706786 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 12 18:41:39.707042 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 12 18:41:39.711259 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 12 18:41:39.736323 chronyd[1669]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Dec 12 18:41:39.744149 jq[1718]: true
Dec 12 18:41:39.746164 (ntainerd)[1725]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 12 18:41:39.752889 update_engine[1692]: I20251212 18:41:39.751058 1692 main.cc:92] Flatcar Update Engine starting
Dec 12 18:41:39.762146 systemd[1]: motdgen.service: Deactivated successfully.
Dec 12 18:41:39.762613 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 12 18:41:39.768899 tar[1701]: linux-amd64/LICENSE
Dec 12 18:41:39.770896 chronyd[1669]: Timezone right/UTC failed leap second check, ignoring
Dec 12 18:41:39.771081 chronyd[1669]: Loaded seccomp filter (level 2)
Dec 12 18:41:39.771234 systemd[1]: Started chronyd.service - NTP client/server.
Dec 12 18:41:39.772400 tar[1701]: linux-amd64/helm
Dec 12 18:41:39.835223 dbus-daemon[1672]: [system] SELinux support is enabled
Dec 12 18:41:39.835650 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 12 18:41:39.842410 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 12 18:41:39.842455 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 12 18:41:39.845010 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 12 18:41:39.845040 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 12 18:41:39.855692 systemd[1]: Started update-engine.service - Update Engine.
Dec 12 18:41:39.857600 update_engine[1692]: I20251212 18:41:39.856866 1692 update_check_scheduler.cc:74] Next update check in 3m38s
Dec 12 18:41:39.863075 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 12 18:41:39.880665 systemd-logind[1691]: New seat seat0.
Dec 12 18:41:39.882629 systemd-logind[1691]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 12 18:41:39.882919 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 12 18:41:39.886855 bash[1749]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 18:41:39.890589 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 12 18:41:39.896334 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 12 18:41:39.952993 coreos-metadata[1671]: Dec 12 18:41:39.952 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 12 18:41:39.961004 coreos-metadata[1671]: Dec 12 18:41:39.960 INFO Fetch successful
Dec 12 18:41:39.961249 coreos-metadata[1671]: Dec 12 18:41:39.961 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Dec 12 18:41:39.966755 coreos-metadata[1671]: Dec 12 18:41:39.966 INFO Fetch successful
Dec 12 18:41:39.966755 coreos-metadata[1671]: Dec 12 18:41:39.966 INFO Fetching http://168.63.129.16/machine/bba8f86e-3326-4301-b3f4-fe172cf7e9db/1d7a1b39%2D2cd7%2D4e03%2D9e55%2D7e7436f7ca6d.%5Fci%2D4459.2.2%2Da%2D7f4437f45c?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Dec 12 18:41:39.970748 coreos-metadata[1671]: Dec 12 18:41:39.969 INFO Fetch successful
Dec 12 18:41:39.970748 coreos-metadata[1671]: Dec 12 18:41:39.969 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Dec 12 18:41:39.985593 coreos-metadata[1671]: Dec 12 18:41:39.985 INFO Fetch successful
Dec 12 18:41:40.033533 sshd_keygen[1722]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 12 18:41:40.078157 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 12 18:41:40.080621 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 12 18:41:40.124757 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 12 18:41:40.132575 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 12 18:41:40.137573 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Dec 12 18:41:40.201397 systemd[1]: issuegen.service: Deactivated successfully.
Dec 12 18:41:40.202887 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 12 18:41:40.248541 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 12 18:41:40.259221 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Dec 12 18:41:40.303079 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 12 18:41:40.308596 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 12 18:41:40.314601 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 12 18:41:40.316754 systemd[1]: Reached target getty.target - Login Prompts.
Dec 12 18:41:40.333304 locksmithd[1752]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 12 18:41:40.441860 tar[1701]: linux-amd64/README.md
Dec 12 18:41:40.458726 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 12 18:41:40.961845 containerd[1725]: time="2025-12-12T18:41:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 12 18:41:40.961845 containerd[1725]: time="2025-12-12T18:41:40.961426179Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 12 18:41:40.972969 containerd[1725]: time="2025-12-12T18:41:40.972928110Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.057µs"
Dec 12 18:41:40.972969 containerd[1725]: time="2025-12-12T18:41:40.972957767Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 12 18:41:40.973083 containerd[1725]: time="2025-12-12T18:41:40.972977577Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 12 18:41:40.973137 containerd[1725]: time="2025-12-12T18:41:40.973119526Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 12 18:41:40.973171 containerd[1725]: time="2025-12-12T18:41:40.973137235Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 12 18:41:40.973171 containerd[1725]: time="2025-12-12T18:41:40.973163559Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:41:40.973228 containerd[1725]: time="2025-12-12T18:41:40.973214142Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:41:40.973250 containerd[1725]: time="2025-12-12T18:41:40.973224993Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:41:40.973481 containerd[1725]: time="2025-12-12T18:41:40.973461253Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:41:40.973481 containerd[1725]: time="2025-12-12T18:41:40.973477285Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:41:40.973541 containerd[1725]: time="2025-12-12T18:41:40.973488422Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:41:40.973541 containerd[1725]: time="2025-12-12T18:41:40.973497240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 12 18:41:40.973582 containerd[1725]: time="2025-12-12T18:41:40.973564185Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 12 18:41:40.973735 containerd[1725]: time="2025-12-12T18:41:40.973718267Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:41:40.973768 containerd[1725]: time="2025-12-12T18:41:40.973744320Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:41:40.973768 containerd[1725]: time="2025-12-12T18:41:40.973755722Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 12 18:41:40.973810 containerd[1725]: time="2025-12-12T18:41:40.973786654Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 12 18:41:40.974912 containerd[1725]: time="2025-12-12T18:41:40.974020592Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 12 18:41:40.974912 containerd[1725]: time="2025-12-12T18:41:40.974071129Z" level=info msg="metadata content store policy set" policy=shared
Dec 12 18:41:40.993423 containerd[1725]: time="2025-12-12T18:41:40.992262281Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 12 18:41:40.993423 containerd[1725]: time="2025-12-12T18:41:40.992321460Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 12 18:41:40.993423 containerd[1725]: time="2025-12-12T18:41:40.992339358Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 12 18:41:40.993423 containerd[1725]: time="2025-12-12T18:41:40.992354866Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 12 18:41:40.993423 containerd[1725]: time="2025-12-12T18:41:40.992369810Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 12 18:41:40.993423 containerd[1725]: time="2025-12-12T18:41:40.992381720Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 12 18:41:40.993423 containerd[1725]: time="2025-12-12T18:41:40.992398846Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 12 18:41:40.993423 containerd[1725]: time="2025-12-12T18:41:40.992420680Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 12 18:41:40.993423 containerd[1725]: time="2025-12-12T18:41:40.992434497Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 12 18:41:40.993423 containerd[1725]: time="2025-12-12T18:41:40.992446760Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 12 18:41:40.993423 containerd[1725]: time="2025-12-12T18:41:40.992458067Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 12 18:41:40.993423 containerd[1725]: time="2025-12-12T18:41:40.992472508Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 12 18:41:40.993423 containerd[1725]: time="2025-12-12T18:41:40.992598403Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 12 18:41:40.993423 containerd[1725]: time="2025-12-12T18:41:40.992616180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 12 18:41:40.993879 containerd[1725]: time="2025-12-12T18:41:40.992630994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 12 18:41:40.993879 containerd[1725]: time="2025-12-12T18:41:40.992655028Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 12 18:41:40.993879 containerd[1725]: time="2025-12-12T18:41:40.992668608Z" level=info msg="loading plugin"
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:41:40.993879 containerd[1725]: time="2025-12-12T18:41:40.992681786Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:41:40.993879 containerd[1725]: time="2025-12-12T18:41:40.992694745Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:41:40.993879 containerd[1725]: time="2025-12-12T18:41:40.992706517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 18:41:40.993879 containerd[1725]: time="2025-12-12T18:41:40.992724455Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:41:40.993879 containerd[1725]: time="2025-12-12T18:41:40.992737541Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:41:40.993879 containerd[1725]: time="2025-12-12T18:41:40.992750246Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:41:40.993879 containerd[1725]: time="2025-12-12T18:41:40.992803557Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:41:40.993879 containerd[1725]: time="2025-12-12T18:41:40.992817491Z" level=info msg="Start snapshots syncer" Dec 12 18:41:40.993879 containerd[1725]: time="2025-12-12T18:41:40.992875515Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:41:40.994168 containerd[1725]: time="2025-12-12T18:41:40.993187362Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 18:41:40.994168 containerd[1725]: time="2025-12-12T18:41:40.993249507Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:41:40.994330 containerd[1725]: time="2025-12-12T18:41:40.993290394Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:41:40.994330 containerd[1725]: time="2025-12-12T18:41:40.993384509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:41:40.994330 containerd[1725]: time="2025-12-12T18:41:40.993403544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:41:40.994330 containerd[1725]: time="2025-12-12T18:41:40.993415816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:41:40.994330 containerd[1725]: time="2025-12-12T18:41:40.993426518Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:41:40.994330 containerd[1725]: time="2025-12-12T18:41:40.993441974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:41:40.994330 containerd[1725]: time="2025-12-12T18:41:40.993453676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:41:40.994330 containerd[1725]: time="2025-12-12T18:41:40.993466191Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:41:40.994330 containerd[1725]: time="2025-12-12T18:41:40.993490930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:41:40.994330 containerd[1725]: time="2025-12-12T18:41:40.993509240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:41:40.994330 containerd[1725]: time="2025-12-12T18:41:40.993522367Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:41:40.994330 containerd[1725]: time="2025-12-12T18:41:40.993552287Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:41:40.994330 containerd[1725]: time="2025-12-12T18:41:40.993569062Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:41:40.994330 containerd[1725]: time="2025-12-12T18:41:40.993579458Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:41:40.994648 containerd[1725]: time="2025-12-12T18:41:40.993590873Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:41:40.994648 containerd[1725]: time="2025-12-12T18:41:40.993599817Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:41:40.994648 containerd[1725]: time="2025-12-12T18:41:40.993612066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:41:40.994648 containerd[1725]: time="2025-12-12T18:41:40.993630582Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:41:40.994648 containerd[1725]: time="2025-12-12T18:41:40.993649175Z" level=info msg="runtime interface created" Dec 12 18:41:40.994648 containerd[1725]: time="2025-12-12T18:41:40.993655111Z" level=info msg="created NRI interface" Dec 12 18:41:40.994648 containerd[1725]: time="2025-12-12T18:41:40.993664829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:41:40.994648 containerd[1725]: time="2025-12-12T18:41:40.993676945Z" level=info msg="Connect containerd service" Dec 12 18:41:40.994648 containerd[1725]: time="2025-12-12T18:41:40.993698783Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:41:40.994648 
containerd[1725]: time="2025-12-12T18:41:40.994526808Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:41:41.287575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:41:41.297181 (kubelet)[1835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:41:41.434967 containerd[1725]: time="2025-12-12T18:41:41.434921571Z" level=info msg="Start subscribing containerd event" Dec 12 18:41:41.435070 containerd[1725]: time="2025-12-12T18:41:41.434982696Z" level=info msg="Start recovering state" Dec 12 18:41:41.435120 containerd[1725]: time="2025-12-12T18:41:41.435101350Z" level=info msg="Start event monitor" Dec 12 18:41:41.435147 containerd[1725]: time="2025-12-12T18:41:41.435125042Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:41:41.435147 containerd[1725]: time="2025-12-12T18:41:41.435134706Z" level=info msg="Start streaming server" Dec 12 18:41:41.435209 containerd[1725]: time="2025-12-12T18:41:41.435145778Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:41:41.435209 containerd[1725]: time="2025-12-12T18:41:41.435155521Z" level=info msg="runtime interface starting up..." Dec 12 18:41:41.435209 containerd[1725]: time="2025-12-12T18:41:41.435163546Z" level=info msg="starting plugins..." Dec 12 18:41:41.435209 containerd[1725]: time="2025-12-12T18:41:41.435178484Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:41:41.436567 containerd[1725]: time="2025-12-12T18:41:41.436535852Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:41:41.436636 containerd[1725]: time="2025-12-12T18:41:41.436603206Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 12 18:41:41.436776 containerd[1725]: time="2025-12-12T18:41:41.436705832Z" level=info msg="containerd successfully booted in 0.478163s" Dec 12 18:41:41.436882 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:41:41.439696 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:41:41.441745 systemd[1]: Startup finished in 3.587s (kernel) + 15.557s (initrd) + 18.676s (userspace) = 37.820s. Dec 12 18:41:41.671224 login[1808]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 12 18:41:41.677121 login[1810]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 12 18:41:41.687064 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:41:41.688647 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:41:41.706149 systemd-logind[1691]: New session 2 of user core. Dec 12 18:41:41.714170 systemd-logind[1691]: New session 1 of user core. Dec 12 18:41:41.725724 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:41:41.730397 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:41:41.745322 (systemd)[1850]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:41:41.747696 systemd-logind[1691]: New session c1 of user core. 
Dec 12 18:41:42.018427 waagent[1804]: 2025-12-12T18:41:42.018272Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 12 18:41:42.023319 waagent[1804]: 2025-12-12T18:41:42.022283Z INFO Daemon Daemon OS: flatcar 4459.2.2 Dec 12 18:41:42.026888 waagent[1804]: 2025-12-12T18:41:42.024071Z INFO Daemon Daemon Python: 3.11.13 Dec 12 18:41:42.026888 waagent[1804]: 2025-12-12T18:41:42.025552Z INFO Daemon Daemon Run daemon Dec 12 18:41:42.028327 waagent[1804]: 2025-12-12T18:41:42.027014Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Dec 12 18:41:42.030856 waagent[1804]: 2025-12-12T18:41:42.027423Z INFO Daemon Daemon Using waagent for provisioning Dec 12 18:41:42.031442 waagent[1804]: 2025-12-12T18:41:42.031393Z INFO Daemon Daemon Activate resource disk Dec 12 18:41:42.032894 waagent[1804]: 2025-12-12T18:41:42.032766Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 12 18:41:42.037148 waagent[1804]: 2025-12-12T18:41:42.037092Z INFO Daemon Daemon Found device: None Dec 12 18:41:42.039899 waagent[1804]: 2025-12-12T18:41:42.038493Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 12 18:41:42.040900 waagent[1804]: 2025-12-12T18:41:42.039418Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 12 18:41:42.046851 waagent[1804]: 2025-12-12T18:41:42.045209Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 12 18:41:42.046648 systemd[1850]: Queued start job for default target default.target. Dec 12 18:41:42.047173 waagent[1804]: 2025-12-12T18:41:42.046139Z INFO Daemon Daemon Running default provisioning handler Dec 12 18:41:42.053075 systemd[1850]: Created slice app.slice - User Application Slice. Dec 12 18:41:42.053410 systemd[1850]: Reached target paths.target - Paths. 
Dec 12 18:41:42.053529 systemd[1850]: Reached target timers.target - Timers. Dec 12 18:41:42.056941 systemd[1850]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:41:42.059846 waagent[1804]: 2025-12-12T18:41:42.059436Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 12 18:41:42.063734 waagent[1804]: 2025-12-12T18:41:42.063690Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 12 18:41:42.064750 waagent[1804]: 2025-12-12T18:41:42.064355Z INFO Daemon Daemon cloud-init is enabled: False Dec 12 18:41:42.067853 waagent[1804]: 2025-12-12T18:41:42.066980Z INFO Daemon Daemon Copying ovf-env.xml Dec 12 18:41:42.079636 systemd[1850]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:41:42.079746 systemd[1850]: Reached target sockets.target - Sockets. Dec 12 18:41:42.079789 systemd[1850]: Reached target basic.target - Basic System. Dec 12 18:41:42.079885 systemd[1850]: Reached target default.target - Main User Target. Dec 12 18:41:42.079911 systemd[1850]: Startup finished in 325ms. Dec 12 18:41:42.081016 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:41:42.087572 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 18:41:42.088252 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 18:41:42.123080 waagent[1804]: 2025-12-12T18:41:42.123029Z INFO Daemon Daemon Successfully mounted dvd Dec 12 18:41:42.156664 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Dec 12 18:41:42.161742 waagent[1804]: 2025-12-12T18:41:42.161586Z INFO Daemon Daemon Detect protocol endpoint Dec 12 18:41:42.163214 waagent[1804]: 2025-12-12T18:41:42.162000Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 12 18:41:42.171680 waagent[1804]: 2025-12-12T18:41:42.163370Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 12 18:41:42.171680 waagent[1804]: 2025-12-12T18:41:42.163488Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 12 18:41:42.171680 waagent[1804]: 2025-12-12T18:41:42.163664Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 12 18:41:42.171680 waagent[1804]: 2025-12-12T18:41:42.163855Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 12 18:41:42.188339 waagent[1804]: 2025-12-12T18:41:42.184423Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 12 18:41:42.188339 waagent[1804]: 2025-12-12T18:41:42.186774Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 12 18:41:42.188632 waagent[1804]: 2025-12-12T18:41:42.188592Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 12 18:41:42.200577 kubelet[1835]: E1212 18:41:42.200546 1835 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:41:42.202620 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:41:42.202726 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:41:42.203205 systemd[1]: kubelet.service: Consumed 1.116s CPU time, 266.9M memory peak. 
Dec 12 18:41:42.293082 waagent[1804]: 2025-12-12T18:41:42.292977Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 12 18:41:42.293559 waagent[1804]: 2025-12-12T18:41:42.293300Z INFO Daemon Daemon Forcing an update of the goal state. Dec 12 18:41:42.298995 waagent[1804]: 2025-12-12T18:41:42.298959Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 12 18:41:42.318434 waagent[1804]: 2025-12-12T18:41:42.318399Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Dec 12 18:41:42.319732 waagent[1804]: 2025-12-12T18:41:42.319695Z INFO Daemon Dec 12 18:41:42.320256 waagent[1804]: 2025-12-12T18:41:42.320079Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 9288edb2-5f35-439b-b40b-6037b0604618 eTag: 5497443008290172382 source: Fabric] Dec 12 18:41:42.322368 waagent[1804]: 2025-12-12T18:41:42.322333Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 12 18:41:42.323789 waagent[1804]: 2025-12-12T18:41:42.323755Z INFO Daemon Dec 12 18:41:42.324093 waagent[1804]: 2025-12-12T18:41:42.324063Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 12 18:41:42.331379 waagent[1804]: 2025-12-12T18:41:42.331345Z INFO Daemon Daemon Downloading artifacts profile blob Dec 12 18:41:42.406685 waagent[1804]: 2025-12-12T18:41:42.406631Z INFO Daemon Downloaded certificate {'thumbprint': '04C0732EA368DF90E4EE99737318F2A8DE15711F', 'hasPrivateKey': True} Dec 12 18:41:42.407652 waagent[1804]: 2025-12-12T18:41:42.407269Z INFO Daemon Fetch goal state completed Dec 12 18:41:42.415128 waagent[1804]: 2025-12-12T18:41:42.415081Z INFO Daemon Daemon Starting provisioning Dec 12 18:41:42.415845 waagent[1804]: 2025-12-12T18:41:42.415340Z INFO Daemon Daemon Handle ovf-env.xml. 
Dec 12 18:41:42.415845 waagent[1804]: 2025-12-12T18:41:42.415418Z INFO Daemon Daemon Set hostname [ci-4459.2.2-a-7f4437f45c] Dec 12 18:41:42.418349 waagent[1804]: 2025-12-12T18:41:42.417850Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-a-7f4437f45c] Dec 12 18:41:42.418349 waagent[1804]: 2025-12-12T18:41:42.418184Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 12 18:41:42.420803 waagent[1804]: 2025-12-12T18:41:42.418501Z INFO Daemon Daemon Primary interface is [eth0] Dec 12 18:41:42.426412 systemd-networkd[1344]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:41:42.426419 systemd-networkd[1344]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:41:42.426476 systemd-networkd[1344]: eth0: DHCP lease lost Dec 12 18:41:42.427400 waagent[1804]: 2025-12-12T18:41:42.427347Z INFO Daemon Daemon Create user account if not exists Dec 12 18:41:42.428106 waagent[1804]: 2025-12-12T18:41:42.427691Z INFO Daemon Daemon User core already exists, skip useradd Dec 12 18:41:42.428106 waagent[1804]: 2025-12-12T18:41:42.427914Z INFO Daemon Daemon Configure sudoer Dec 12 18:41:42.437189 waagent[1804]: 2025-12-12T18:41:42.436150Z INFO Daemon Daemon Configure sshd Dec 12 18:41:42.441089 waagent[1804]: 2025-12-12T18:41:42.441044Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 12 18:41:42.441573 waagent[1804]: 2025-12-12T18:41:42.441308Z INFO Daemon Daemon Deploy ssh public key. 
Dec 12 18:41:42.443063 systemd-networkd[1344]: eth0: DHCPv4 address 10.200.4.12/24, gateway 10.200.4.1 acquired from 168.63.129.16 Dec 12 18:41:43.529309 waagent[1804]: 2025-12-12T18:41:43.529258Z INFO Daemon Daemon Provisioning complete Dec 12 18:41:43.539595 waagent[1804]: 2025-12-12T18:41:43.539561Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 12 18:41:43.540158 waagent[1804]: 2025-12-12T18:41:43.539925Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 12 18:41:43.541419 waagent[1804]: 2025-12-12T18:41:43.540203Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 12 18:41:43.653559 waagent[1902]: 2025-12-12T18:41:43.653480Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 12 18:41:43.653862 waagent[1902]: 2025-12-12T18:41:43.653588Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Dec 12 18:41:43.653862 waagent[1902]: 2025-12-12T18:41:43.653634Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 12 18:41:43.653862 waagent[1902]: 2025-12-12T18:41:43.653680Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 12 18:41:43.686485 waagent[1902]: 2025-12-12T18:41:43.686424Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 12 18:41:43.686649 waagent[1902]: 2025-12-12T18:41:43.686615Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 12 18:41:43.686719 waagent[1902]: 2025-12-12T18:41:43.686683Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 12 18:41:43.693292 waagent[1902]: 2025-12-12T18:41:43.693229Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 12 18:41:43.698215 waagent[1902]: 2025-12-12T18:41:43.698174Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Dec 12 
18:41:43.698575 waagent[1902]: 2025-12-12T18:41:43.698540Z INFO ExtHandler Dec 12 18:41:43.698620 waagent[1902]: 2025-12-12T18:41:43.698595Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 38f4c675-44c9-42fa-a85e-f8079571b950 eTag: 5497443008290172382 source: Fabric] Dec 12 18:41:43.698818 waagent[1902]: 2025-12-12T18:41:43.698791Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 12 18:41:43.699204 waagent[1902]: 2025-12-12T18:41:43.699170Z INFO ExtHandler Dec 12 18:41:43.699246 waagent[1902]: 2025-12-12T18:41:43.699217Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 12 18:41:43.703348 waagent[1902]: 2025-12-12T18:41:43.703314Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 12 18:41:43.799236 waagent[1902]: 2025-12-12T18:41:43.799147Z INFO ExtHandler Downloaded certificate {'thumbprint': '04C0732EA368DF90E4EE99737318F2A8DE15711F', 'hasPrivateKey': True} Dec 12 18:41:43.799558 waagent[1902]: 2025-12-12T18:41:43.799526Z INFO ExtHandler Fetch goal state completed Dec 12 18:41:43.814068 waagent[1902]: 2025-12-12T18:41:43.814015Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Dec 12 18:41:43.818441 waagent[1902]: 2025-12-12T18:41:43.818387Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1902 Dec 12 18:41:43.818539 waagent[1902]: 2025-12-12T18:41:43.818513Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 12 18:41:43.818785 waagent[1902]: 2025-12-12T18:41:43.818760Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 12 18:41:43.819796 waagent[1902]: 2025-12-12T18:41:43.819763Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Dec 12 18:41:43.820116 waagent[1902]: 
2025-12-12T18:41:43.820083Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 12 18:41:43.820225 waagent[1902]: 2025-12-12T18:41:43.820201Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 12 18:41:43.820611 waagent[1902]: 2025-12-12T18:41:43.820584Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 12 18:41:43.850965 waagent[1902]: 2025-12-12T18:41:43.850932Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 12 18:41:43.851112 waagent[1902]: 2025-12-12T18:41:43.851088Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 12 18:41:43.856861 waagent[1902]: 2025-12-12T18:41:43.856725Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 12 18:41:43.862286 systemd[1]: Reload requested from client PID 1917 ('systemctl') (unit waagent.service)... Dec 12 18:41:43.862300 systemd[1]: Reloading... Dec 12 18:41:43.943883 zram_generator::config[1956]: No configuration found. Dec 12 18:41:44.132072 systemd[1]: Reloading finished in 269 ms. Dec 12 18:41:44.145276 waagent[1902]: 2025-12-12T18:41:44.144366Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 12 18:41:44.145276 waagent[1902]: 2025-12-12T18:41:44.144511Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 12 18:41:44.178845 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#98 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Dec 12 18:41:44.470067 waagent[1902]: 2025-12-12T18:41:44.469948Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Dec 12 18:41:44.470289 waagent[1902]: 2025-12-12T18:41:44.470257Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 12 18:41:44.470930 waagent[1902]: 2025-12-12T18:41:44.470891Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 12 18:41:44.471244 waagent[1902]: 2025-12-12T18:41:44.471213Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 12 18:41:44.471290 waagent[1902]: 2025-12-12T18:41:44.471266Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 12 18:41:44.471345 waagent[1902]: 2025-12-12T18:41:44.471323Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 12 18:41:44.471593 waagent[1902]: 2025-12-12T18:41:44.471569Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 12 18:41:44.471681 waagent[1902]: 2025-12-12T18:41:44.471648Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 12 18:41:44.471954 waagent[1902]: 2025-12-12T18:41:44.471903Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 12 18:41:44.472015 waagent[1902]: 2025-12-12T18:41:44.471957Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Dec 12 18:41:44.472169 waagent[1902]: 2025-12-12T18:41:44.472147Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 12 18:41:44.472289 waagent[1902]: 2025-12-12T18:41:44.472266Z INFO EnvHandler ExtHandler Configure routes Dec 12 18:41:44.472425 waagent[1902]: 2025-12-12T18:41:44.472312Z INFO EnvHandler ExtHandler Gateway:None Dec 12 18:41:44.472425 waagent[1902]: 2025-12-12T18:41:44.472387Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 12 18:41:44.472609 waagent[1902]: 2025-12-12T18:41:44.472584Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 12 18:41:44.472609 waagent[1902]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 12 18:41:44.472609 waagent[1902]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Dec 12 18:41:44.472609 waagent[1902]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 12 18:41:44.472609 waagent[1902]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 12 18:41:44.472609 waagent[1902]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 12 18:41:44.472609 waagent[1902]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 12 18:41:44.472754 waagent[1902]: 2025-12-12T18:41:44.472638Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 12 18:41:44.473062 waagent[1902]: 2025-12-12T18:41:44.473029Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 12 18:41:44.473106 waagent[1902]: 2025-12-12T18:41:44.473091Z INFO EnvHandler ExtHandler Routes:None Dec 12 18:41:44.481408 waagent[1902]: 2025-12-12T18:41:44.481363Z INFO ExtHandler ExtHandler Dec 12 18:41:44.481495 waagent[1902]: 2025-12-12T18:41:44.481428Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8df5a4cf-a7fa-4b20-871a-1055c605a230 correlation e08b596f-11c1-41ad-b06e-43b79906f1f2 created: 2025-12-12T18:40:36.847260Z] Dec 12 18:41:44.481721 waagent[1902]: 2025-12-12T18:41:44.481695Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 12 18:41:44.482149 waagent[1902]: 2025-12-12T18:41:44.482126Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 12 18:41:44.513946 waagent[1902]: 2025-12-12T18:41:44.513895Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 12 18:41:44.513946 waagent[1902]: Try `iptables -h' or 'iptables --help' for more information.) 
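The WARNING above fails because the nf_tables iptables backend (v1.8.x) rejects numeric/verbose output flags (`-nxv`) when combined with the `--zero` action in a single invocation. A hedged workaround sketch, splitting the call in two (assumes root and the `iptables` binary on the host; function names are illustrative):

```shell
# nf_tables-based iptables rejects "--numeric" combined with --zero,
# so list the counters and zero them in two separate invocations.
list_firewall_packets() {
    iptables -w -t security -L OUTPUT -nxv
}
zero_firewall_packets() {
    iptables -w -t security -Z OUTPUT
}
```

The legacy (x_tables) iptables accepted the combined form, which is why the agent's single-shot command only breaks on nf_tables hosts like this one.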
Dec 12 18:41:44.514284 waagent[1902]: 2025-12-12T18:41:44.514256Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0B466C35-F782-4414-82E5-B62F99A15615;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 12 18:41:44.540431 waagent[1902]: 2025-12-12T18:41:44.540379Z INFO MonitorHandler ExtHandler Network interfaces: Dec 12 18:41:44.540431 waagent[1902]: Executing ['ip', '-a', '-o', 'link']: Dec 12 18:41:44.540431 waagent[1902]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 12 18:41:44.540431 waagent[1902]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d4:ea:0b brd ff:ff:ff:ff:ff:ff\ alias Network Device Dec 12 18:41:44.540431 waagent[1902]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d4:ea:0b brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Dec 12 18:41:44.540431 waagent[1902]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 12 18:41:44.540431 waagent[1902]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 12 18:41:44.540431 waagent[1902]: 2: eth0 inet 10.200.4.12/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 12 18:41:44.540431 waagent[1902]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 12 18:41:44.540431 waagent[1902]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 12 18:41:44.540431 waagent[1902]: 2: eth0 inet6 fe80::20d:3aff:fed4:ea0b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 12 18:41:44.572280 waagent[1902]: 2025-12-12T18:41:44.572221Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 12 18:41:44.572280 waagent[1902]: Chain INPUT (policy ACCEPT 0 
packets, 0 bytes) Dec 12 18:41:44.572280 waagent[1902]: pkts bytes target prot opt in out source destination Dec 12 18:41:44.572280 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 12 18:41:44.572280 waagent[1902]: pkts bytes target prot opt in out source destination Dec 12 18:41:44.572280 waagent[1902]: Chain OUTPUT (policy ACCEPT 3 packets, 534 bytes) Dec 12 18:41:44.572280 waagent[1902]: pkts bytes target prot opt in out source destination Dec 12 18:41:44.572280 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 12 18:41:44.572280 waagent[1902]: 2 112 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 12 18:41:44.572280 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 12 18:41:44.576009 waagent[1902]: 2025-12-12T18:41:44.575963Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 12 18:41:44.576009 waagent[1902]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 12 18:41:44.576009 waagent[1902]: pkts bytes target prot opt in out source destination Dec 12 18:41:44.576009 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 12 18:41:44.576009 waagent[1902]: pkts bytes target prot opt in out source destination Dec 12 18:41:44.576009 waagent[1902]: Chain OUTPUT (policy ACCEPT 3 packets, 534 bytes) Dec 12 18:41:44.576009 waagent[1902]: pkts bytes target prot opt in out source destination Dec 12 18:41:44.576009 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 12 18:41:44.576009 waagent[1902]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 12 18:41:44.576009 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 12 18:41:52.338374 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 18:41:52.339843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
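The three OUTPUT rules in the dumps above can be expressed as plain iptables commands. A hedged sketch reconstructed from the counters output (requires root; 168.63.129.16 is the Azure WireServer):

```shell
# Allow DNS to the WireServer, allow root-owned (UID 0) TCP to it, and
# drop any other new or invalid connection attempts, per the dump above.
iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
```

The effect is that only the agent (running as root) and DNS can open connections to the WireServer; unprivileged processes hit the final DROP.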
Dec 12 18:41:52.836058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:41:52.844074 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:41:52.884841 kubelet[2054]: E1212 18:41:52.884794 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:41:52.888058 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:41:52.888214 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:41:52.888566 systemd[1]: kubelet.service: Consumed 144ms CPU time, 108.3M memory peak. Dec 12 18:41:53.100542 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:41:53.101653 systemd[1]: Started sshd@0-10.200.4.12:22-10.200.16.10:45688.service - OpenSSH per-connection server daemon (10.200.16.10:45688). Dec 12 18:41:53.788746 sshd[2062]: Accepted publickey for core from 10.200.16.10 port 45688 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk Dec 12 18:41:53.789900 sshd-session[2062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:53.794340 systemd-logind[1691]: New session 3 of user core. Dec 12 18:41:53.800976 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:41:54.310061 systemd[1]: Started sshd@1-10.200.4.12:22-10.200.16.10:45690.service - OpenSSH per-connection server daemon (10.200.16.10:45690). 
Dec 12 18:41:54.915998 sshd[2068]: Accepted publickey for core from 10.200.16.10 port 45690 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk Dec 12 18:41:54.917225 sshd-session[2068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:54.921749 systemd-logind[1691]: New session 4 of user core. Dec 12 18:41:54.928987 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:41:55.342254 sshd[2071]: Connection closed by 10.200.16.10 port 45690 Dec 12 18:41:55.342756 sshd-session[2068]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:55.346071 systemd[1]: sshd@1-10.200.4.12:22-10.200.16.10:45690.service: Deactivated successfully. Dec 12 18:41:55.347699 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 18:41:55.348512 systemd-logind[1691]: Session 4 logged out. Waiting for processes to exit. Dec 12 18:41:55.349663 systemd-logind[1691]: Removed session 4. Dec 12 18:41:55.468508 systemd[1]: Started sshd@2-10.200.4.12:22-10.200.16.10:45694.service - OpenSSH per-connection server daemon (10.200.16.10:45694). Dec 12 18:41:56.068792 sshd[2077]: Accepted publickey for core from 10.200.16.10 port 45694 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk Dec 12 18:41:56.070105 sshd-session[2077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:56.074823 systemd-logind[1691]: New session 5 of user core. Dec 12 18:41:56.081992 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 18:41:56.508323 sshd[2080]: Connection closed by 10.200.16.10 port 45694 Dec 12 18:41:56.508963 sshd-session[2077]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:56.512430 systemd[1]: sshd@2-10.200.4.12:22-10.200.16.10:45694.service: Deactivated successfully. Dec 12 18:41:56.514099 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:41:56.514776 systemd-logind[1691]: Session 5 logged out. 
Waiting for processes to exit. Dec 12 18:41:56.516372 systemd-logind[1691]: Removed session 5. Dec 12 18:41:56.618449 systemd[1]: Started sshd@3-10.200.4.12:22-10.200.16.10:45698.service - OpenSSH per-connection server daemon (10.200.16.10:45698). Dec 12 18:41:57.215783 sshd[2086]: Accepted publickey for core from 10.200.16.10 port 45698 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk Dec 12 18:41:57.217026 sshd-session[2086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:57.221744 systemd-logind[1691]: New session 6 of user core. Dec 12 18:41:57.230978 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 18:41:57.644794 sshd[2089]: Connection closed by 10.200.16.10 port 45698 Dec 12 18:41:57.645382 sshd-session[2086]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:57.648848 systemd[1]: sshd@3-10.200.4.12:22-10.200.16.10:45698.service: Deactivated successfully. Dec 12 18:41:57.650365 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:41:57.651131 systemd-logind[1691]: Session 6 logged out. Waiting for processes to exit. Dec 12 18:41:57.652380 systemd-logind[1691]: Removed session 6. Dec 12 18:41:57.750904 systemd[1]: Started sshd@4-10.200.4.12:22-10.200.16.10:45702.service - OpenSSH per-connection server daemon (10.200.16.10:45702). Dec 12 18:41:58.352196 sshd[2095]: Accepted publickey for core from 10.200.16.10 port 45702 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk Dec 12 18:41:58.353396 sshd-session[2095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:58.357744 systemd-logind[1691]: New session 7 of user core. Dec 12 18:41:58.363980 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 12 18:41:58.768579 sudo[2099]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:41:58.768848 sudo[2099]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:41:58.794872 sudo[2099]: pam_unix(sudo:session): session closed for user root Dec 12 18:41:58.894209 sshd[2098]: Connection closed by 10.200.16.10 port 45702 Dec 12 18:41:58.894879 sshd-session[2095]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:58.898455 systemd[1]: sshd@4-10.200.4.12:22-10.200.16.10:45702.service: Deactivated successfully. Dec 12 18:41:58.900037 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 18:41:58.900702 systemd-logind[1691]: Session 7 logged out. Waiting for processes to exit. Dec 12 18:41:58.901986 systemd-logind[1691]: Removed session 7. Dec 12 18:41:58.998598 systemd[1]: Started sshd@5-10.200.4.12:22-10.200.16.10:45704.service - OpenSSH per-connection server daemon (10.200.16.10:45704). Dec 12 18:41:59.605202 sshd[2105]: Accepted publickey for core from 10.200.16.10 port 45704 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk Dec 12 18:41:59.606455 sshd-session[2105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:59.611132 systemd-logind[1691]: New session 8 of user core. Dec 12 18:41:59.614001 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 12 18:41:59.933452 sudo[2110]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 18:41:59.933684 sudo[2110]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:41:59.940303 sudo[2110]: pam_unix(sudo:session): session closed for user root Dec 12 18:41:59.944472 sudo[2109]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 18:41:59.944720 sudo[2109]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:41:59.953364 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:41:59.983778 augenrules[2132]: No rules Dec 12 18:41:59.984290 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:41:59.984447 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:41:59.985487 sudo[2109]: pam_unix(sudo:session): session closed for user root Dec 12 18:42:00.085794 sshd[2108]: Connection closed by 10.200.16.10 port 45704 Dec 12 18:42:00.086239 sshd-session[2105]: pam_unix(sshd:session): session closed for user core Dec 12 18:42:00.089420 systemd[1]: sshd@5-10.200.4.12:22-10.200.16.10:45704.service: Deactivated successfully. Dec 12 18:42:00.090982 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:42:00.091722 systemd-logind[1691]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:42:00.092771 systemd-logind[1691]: Removed session 8. Dec 12 18:42:00.189801 systemd[1]: Started sshd@6-10.200.4.12:22-10.200.16.10:52870.service - OpenSSH per-connection server daemon (10.200.16.10:52870). 
Dec 12 18:42:00.787478 sshd[2141]: Accepted publickey for core from 10.200.16.10 port 52870 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk Dec 12 18:42:00.788620 sshd-session[2141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:42:00.792920 systemd-logind[1691]: New session 9 of user core. Dec 12 18:42:00.799963 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 18:42:01.116551 sudo[2145]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:42:01.116789 sudo[2145]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:42:02.508782 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 18:42:02.518107 (dockerd)[2163]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 18:42:03.088300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 18:42:03.089903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:42:03.554146 chronyd[1669]: Selected source PHC0 Dec 12 18:42:03.674132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 18:42:03.686101 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:42:03.720996 kubelet[2175]: E1212 18:42:03.720942 2175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:42:03.723896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:42:03.724058 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:42:03.724389 systemd[1]: kubelet.service: Consumed 143ms CPU time, 108.3M memory peak. Dec 12 18:42:04.006766 dockerd[2163]: time="2025-12-12T18:42:04.006690572Z" level=info msg="Starting up" Dec 12 18:42:04.008027 dockerd[2163]: time="2025-12-12T18:42:04.007997291Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 18:42:04.017611 dockerd[2163]: time="2025-12-12T18:42:04.017572535Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 18:42:04.052435 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport67284726-merged.mount: Deactivated successfully. Dec 12 18:42:04.178180 dockerd[2163]: time="2025-12-12T18:42:04.178136259Z" level=info msg="Loading containers: start." Dec 12 18:42:04.215855 kernel: Initializing XFRM netlink socket Dec 12 18:42:04.523707 systemd-networkd[1344]: docker0: Link UP Dec 12 18:42:04.544149 dockerd[2163]: time="2025-12-12T18:42:04.544112108Z" level=info msg="Loading containers: done." Dec 12 18:42:04.555739 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3769949388-merged.mount: Deactivated successfully. 
Dec 12 18:42:04.570081 dockerd[2163]: time="2025-12-12T18:42:04.570049436Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 18:42:04.570218 dockerd[2163]: time="2025-12-12T18:42:04.570128064Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 18:42:04.570218 dockerd[2163]: time="2025-12-12T18:42:04.570205085Z" level=info msg="Initializing buildkit" Dec 12 18:42:04.630600 dockerd[2163]: time="2025-12-12T18:42:04.630560827Z" level=info msg="Completed buildkit initialization" Dec 12 18:42:04.638295 dockerd[2163]: time="2025-12-12T18:42:04.638256685Z" level=info msg="Daemon has completed initialization" Dec 12 18:42:04.638727 dockerd[2163]: time="2025-12-12T18:42:04.638399201Z" level=info msg="API listen on /run/docker.sock" Dec 12 18:42:04.638493 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 18:42:05.656227 containerd[1725]: time="2025-12-12T18:42:05.656177341Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 12 18:42:06.407374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151045388.mount: Deactivated successfully. 
Dec 12 18:42:07.739631 containerd[1725]: time="2025-12-12T18:42:07.739575832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:07.742423 containerd[1725]: time="2025-12-12T18:42:07.742382700Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072191" Dec 12 18:42:07.745985 containerd[1725]: time="2025-12-12T18:42:07.745922943Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:07.752513 containerd[1725]: time="2025-12-12T18:42:07.751458730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:07.752513 containerd[1725]: time="2025-12-12T18:42:07.752260772Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 2.096036135s" Dec 12 18:42:07.752513 containerd[1725]: time="2025-12-12T18:42:07.752295494Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 12 18:42:07.753131 containerd[1725]: time="2025-12-12T18:42:07.753107942Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 12 18:42:09.011770 containerd[1725]: time="2025-12-12T18:42:09.011723846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" 
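The pull record above reports both bytes read and elapsed time, so an effective transfer rate falls out directly. A back-of-envelope check with the numbers copied from the entries above (rough figure only; the elapsed time also covers unpacking):

```python
# Effective rate for the kube-apiserver pull logged above.
bytes_read = 29_072_191        # "bytes read" from the stop-pulling entry
elapsed_s = 2.096036135        # duration from the "Pulled image" entry
rate_mib_s = bytes_read / elapsed_s / (1 << 20)   # roughly 13 MiB/s
```

That order of magnitude is consistent with the later kube-proxy pull (31,161,431 bytes in 9.01 s), which ran slower while competing with other image pulls.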
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:09.014913 containerd[1725]: time="2025-12-12T18:42:09.014695241Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992018" Dec 12 18:42:09.018315 containerd[1725]: time="2025-12-12T18:42:09.018289332Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:09.025008 containerd[1725]: time="2025-12-12T18:42:09.024968876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:09.025639 containerd[1725]: time="2025-12-12T18:42:09.025616738Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.272478293s" Dec 12 18:42:09.025725 containerd[1725]: time="2025-12-12T18:42:09.025711648Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 12 18:42:09.026261 containerd[1725]: time="2025-12-12T18:42:09.026220298Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 12 18:42:10.118132 containerd[1725]: time="2025-12-12T18:42:10.118081223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:10.120783 containerd[1725]: 
time="2025-12-12T18:42:10.120751401Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404256" Dec 12 18:42:10.124230 containerd[1725]: time="2025-12-12T18:42:10.124191362Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:10.128377 containerd[1725]: time="2025-12-12T18:42:10.128330368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:10.129265 containerd[1725]: time="2025-12-12T18:42:10.129078622Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.102826592s" Dec 12 18:42:10.129265 containerd[1725]: time="2025-12-12T18:42:10.129115756Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 12 18:42:10.129911 containerd[1725]: time="2025-12-12T18:42:10.129887122Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 12 18:42:13.838298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 12 18:42:13.839896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:42:14.295154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 18:42:14.298562 (kubelet)[2458]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:42:14.336782 kubelet[2458]: E1212 18:42:14.336749 2458 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:42:14.338423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:42:14.338534 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:42:14.338999 systemd[1]: kubelet.service: Consumed 138ms CPU time, 110.3M memory peak. Dec 12 18:42:18.023133 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Dec 12 18:42:18.729959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1373720448.mount: Deactivated successfully. 
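The kubelet failures recur roughly every ten seconds ("Scheduled restart job, restart counter is at N" at 18:41:52, 18:42:03, 18:42:13, ...), until `kubeadm` writes `/var/lib/kubelet/config.yaml`. A sketch of the kind of systemd unit settings that produce this pattern (illustrative values inferred from the timing, not necessarily Flatcar's actual `kubelet.service`):

```ini
[Service]
# Restart the kubelet whenever it exits, pausing ~10 s between attempts;
# this matches the ~10 s spacing of the restart-counter messages above.
Restart=always
RestartSec=10
```

This restart-until-configured loop is expected on a kubeadm node: the unit keeps retrying until `kubeadm init`/`kubeadm join` drops the config file in place.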
Dec 12 18:42:19.131875 containerd[1725]: time="2025-12-12T18:42:19.131811227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:19.134609 containerd[1725]: time="2025-12-12T18:42:19.134582442Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161431" Dec 12 18:42:19.137946 containerd[1725]: time="2025-12-12T18:42:19.137895336Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:19.141639 containerd[1725]: time="2025-12-12T18:42:19.141597472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:19.142132 containerd[1725]: time="2025-12-12T18:42:19.141994691Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 9.012078222s" Dec 12 18:42:19.142132 containerd[1725]: time="2025-12-12T18:42:19.142030847Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 12 18:42:19.142592 containerd[1725]: time="2025-12-12T18:42:19.142574367Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 12 18:42:19.709058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3859074570.mount: Deactivated successfully. 
Dec 12 18:42:20.797924 containerd[1725]: time="2025-12-12T18:42:20.797873793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:20.800852 containerd[1725]: time="2025-12-12T18:42:20.800801028Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Dec 12 18:42:20.804148 containerd[1725]: time="2025-12-12T18:42:20.804085568Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:20.808185 containerd[1725]: time="2025-12-12T18:42:20.808131160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:20.809027 containerd[1725]: time="2025-12-12T18:42:20.808848303Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.666229904s" Dec 12 18:42:20.809027 containerd[1725]: time="2025-12-12T18:42:20.808883952Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 12 18:42:20.809776 containerd[1725]: time="2025-12-12T18:42:20.809745510Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 18:42:21.354145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009232037.mount: Deactivated successfully. 
Dec 12 18:42:21.375804 containerd[1725]: time="2025-12-12T18:42:21.375741769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:42:21.378581 containerd[1725]: time="2025-12-12T18:42:21.378436578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Dec 12 18:42:21.381912 containerd[1725]: time="2025-12-12T18:42:21.381883957Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:42:21.391286 containerd[1725]: time="2025-12-12T18:42:21.390797035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:42:21.391547 containerd[1725]: time="2025-12-12T18:42:21.391513182Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 581.738879ms" Dec 12 18:42:21.391605 containerd[1725]: time="2025-12-12T18:42:21.391546156Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 12 18:42:21.392207 containerd[1725]: time="2025-12-12T18:42:21.392182585Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 12 18:42:22.223965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3138142651.mount: 
Deactivated successfully. Dec 12 18:42:24.026277 containerd[1725]: time="2025-12-12T18:42:24.026226120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:24.030406 containerd[1725]: time="2025-12-12T18:42:24.030258221Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Dec 12 18:42:24.036153 containerd[1725]: time="2025-12-12T18:42:24.036123672Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:24.047171 containerd[1725]: time="2025-12-12T18:42:24.047139913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:24.048035 containerd[1725]: time="2025-12-12T18:42:24.047894480Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.655682707s" Dec 12 18:42:24.048035 containerd[1725]: time="2025-12-12T18:42:24.047928570Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 12 18:42:24.588380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 12 18:42:24.590358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:42:24.632565 update_engine[1692]: I20251212 18:42:24.632015 1692 update_attempter.cc:509] Updating boot flags... 
Dec 12 18:42:25.118984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:42:25.126129 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:42:25.171753 kubelet[2646]: E1212 18:42:25.171715 2646 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:42:25.173458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:42:25.173595 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:42:25.173978 systemd[1]: kubelet.service: Consumed 156ms CPU time, 110.1M memory peak. Dec 12 18:42:26.536433 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:42:26.536585 systemd[1]: kubelet.service: Consumed 156ms CPU time, 110.1M memory peak. Dec 12 18:42:26.538952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:42:26.567892 systemd[1]: Reload requested from client PID 2660 ('systemctl') (unit session-9.scope)... Dec 12 18:42:26.567906 systemd[1]: Reloading... Dec 12 18:42:26.696858 zram_generator::config[2709]: No configuration found. Dec 12 18:42:26.943528 systemd[1]: Reloading finished in 375 ms. Dec 12 18:42:26.986206 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:42:26.986393 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:42:26.986730 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:42:26.986767 systemd[1]: kubelet.service: Consumed 108ms CPU time, 98.3M memory peak. 
Dec 12 18:42:26.988024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:42:27.812229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:42:27.822148 (kubelet)[2777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:42:27.861382 kubelet[2777]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:42:27.861382 kubelet[2777]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:42:27.861382 kubelet[2777]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 18:42:27.861682 kubelet[2777]: I1212 18:42:27.861435 2777 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:42:28.303563 kubelet[2777]: I1212 18:42:28.303520 2777 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 18:42:28.303563 kubelet[2777]: I1212 18:42:28.303549 2777 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:42:28.303849 kubelet[2777]: I1212 18:42:28.303816 2777 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 18:42:28.336657 kubelet[2777]: E1212 18:42:28.336618 2777 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.12:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:42:28.337754 kubelet[2777]: I1212 18:42:28.337591 2777 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:42:28.347847 kubelet[2777]: I1212 18:42:28.346275 2777 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:42:28.348775 kubelet[2777]: I1212 18:42:28.348744 2777 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 18:42:28.348964 kubelet[2777]: I1212 18:42:28.348929 2777 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:42:28.349108 kubelet[2777]: I1212 18:42:28.348959 2777 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-7f4437f45c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:42:28.349223 kubelet[2777]: I1212 18:42:28.349117 2777 topology_manager.go:138] "Creating topology manager 
with none policy" Dec 12 18:42:28.349223 kubelet[2777]: I1212 18:42:28.349127 2777 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 18:42:28.349273 kubelet[2777]: I1212 18:42:28.349235 2777 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:42:28.352529 kubelet[2777]: I1212 18:42:28.352513 2777 kubelet.go:446] "Attempting to sync node with API server" Dec 12 18:42:28.354663 kubelet[2777]: I1212 18:42:28.354395 2777 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:42:28.354663 kubelet[2777]: I1212 18:42:28.354429 2777 kubelet.go:352] "Adding apiserver pod source" Dec 12 18:42:28.354663 kubelet[2777]: I1212 18:42:28.354443 2777 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:42:28.358743 kubelet[2777]: W1212 18:42:28.358705 2777 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-7f4437f45c&limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Dec 12 18:42:28.359120 kubelet[2777]: E1212 18:42:28.359087 2777 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-7f4437f45c&limit=500&resourceVersion=0\": dial tcp 10.200.4.12:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:42:28.359233 kubelet[2777]: I1212 18:42:28.359219 2777 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:42:28.359637 kubelet[2777]: I1212 18:42:28.359623 2777 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 18:42:28.361658 kubelet[2777]: W1212 18:42:28.360352 2777 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 18:42:28.363492 kubelet[2777]: W1212 18:42:28.363399 2777 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Dec 12 18:42:28.363492 kubelet[2777]: E1212 18:42:28.363468 2777 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.12:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:42:28.366141 kubelet[2777]: I1212 18:42:28.366114 2777 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:42:28.366212 kubelet[2777]: I1212 18:42:28.366159 2777 server.go:1287] "Started kubelet" Dec 12 18:42:28.369468 kubelet[2777]: I1212 18:42:28.369423 2777 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:42:28.370822 kubelet[2777]: I1212 18:42:28.370395 2777 server.go:479] "Adding debug handlers to kubelet server" Dec 12 18:42:28.371365 kubelet[2777]: I1212 18:42:28.371313 2777 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:42:28.372319 kubelet[2777]: I1212 18:42:28.371564 2777 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:42:28.372913 kubelet[2777]: I1212 18:42:28.372893 2777 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:42:28.375211 kubelet[2777]: I1212 18:42:28.374197 2777 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:42:28.375211 
kubelet[2777]: E1212 18:42:28.373644 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-a-7f4437f45c.18808bf7573e96ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-7f4437f45c,UID:ci-4459.2.2-a-7f4437f45c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-7f4437f45c,},FirstTimestamp:2025-12-12 18:42:28.366137004 +0000 UTC m=+0.539391975,LastTimestamp:2025-12-12 18:42:28.366137004 +0000 UTC m=+0.539391975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-7f4437f45c,}" Dec 12 18:42:28.375920 kubelet[2777]: I1212 18:42:28.375909 2777 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:42:28.376232 kubelet[2777]: E1212 18:42:28.376220 2777 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-7f4437f45c\" not found" Dec 12 18:42:28.378293 kubelet[2777]: E1212 18:42:28.378259 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-7f4437f45c?timeout=10s\": dial tcp 10.200.4.12:6443: connect: connection refused" interval="200ms" Dec 12 18:42:28.378702 kubelet[2777]: I1212 18:42:28.378689 2777 factory.go:221] Registration of the systemd container factory successfully Dec 12 18:42:28.378879 kubelet[2777]: I1212 18:42:28.378864 2777 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:42:28.380460 
kubelet[2777]: I1212 18:42:28.380445 2777 factory.go:221] Registration of the containerd container factory successfully Dec 12 18:42:28.381996 kubelet[2777]: I1212 18:42:28.381978 2777 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:42:28.382067 kubelet[2777]: I1212 18:42:28.382034 2777 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:42:28.394943 kubelet[2777]: I1212 18:42:28.394903 2777 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 18:42:28.395891 kubelet[2777]: I1212 18:42:28.395865 2777 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 18:42:28.395891 kubelet[2777]: I1212 18:42:28.395888 2777 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 18:42:28.395980 kubelet[2777]: I1212 18:42:28.395904 2777 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:42:28.395980 kubelet[2777]: I1212 18:42:28.395910 2777 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 18:42:28.395980 kubelet[2777]: E1212 18:42:28.395951 2777 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:42:28.401853 kubelet[2777]: W1212 18:42:28.401774 2777 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Dec 12 18:42:28.401940 kubelet[2777]: E1212 18:42:28.401855 2777 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.12:6443: connect: connection refused" 
logger="UnhandledError" Dec 12 18:42:28.401973 kubelet[2777]: W1212 18:42:28.401940 2777 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Dec 12 18:42:28.402001 kubelet[2777]: E1212 18:42:28.401971 2777 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.12:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:42:28.404339 kubelet[2777]: E1212 18:42:28.404255 2777 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:42:28.409551 kubelet[2777]: I1212 18:42:28.409530 2777 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:42:28.409551 kubelet[2777]: I1212 18:42:28.409544 2777 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:42:28.409551 kubelet[2777]: I1212 18:42:28.409559 2777 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:42:28.477063 kubelet[2777]: E1212 18:42:28.477017 2777 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-7f4437f45c\" not found" Dec 12 18:42:28.486552 kubelet[2777]: I1212 18:42:28.486535 2777 policy_none.go:49] "None policy: Start" Dec 12 18:42:28.486552 kubelet[2777]: I1212 18:42:28.486554 2777 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:42:28.486647 kubelet[2777]: I1212 18:42:28.486564 2777 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:42:28.496612 kubelet[2777]: E1212 18:42:28.496593 2777 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check 
may not have completed yet" Dec 12 18:42:28.496966 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:42:28.514203 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:42:28.517273 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 18:42:28.529379 kubelet[2777]: I1212 18:42:28.529355 2777 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 18:42:28.529521 kubelet[2777]: I1212 18:42:28.529510 2777 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:42:28.529890 kubelet[2777]: I1212 18:42:28.529525 2777 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:42:28.529890 kubelet[2777]: I1212 18:42:28.529723 2777 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:42:28.531609 kubelet[2777]: E1212 18:42:28.531588 2777 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:42:28.531677 kubelet[2777]: E1212 18:42:28.531631 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-a-7f4437f45c\" not found" Dec 12 18:42:28.579287 kubelet[2777]: E1212 18:42:28.579212 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-7f4437f45c?timeout=10s\": dial tcp 10.200.4.12:6443: connect: connection refused" interval="400ms" Dec 12 18:42:28.631816 kubelet[2777]: I1212 18:42:28.631782 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.632097 kubelet[2777]: E1212 18:42:28.632059 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.12:6443/api/v1/nodes\": dial tcp 10.200.4.12:6443: connect: connection refused" node="ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.705935 systemd[1]: Created slice kubepods-burstable-podcf959182df800fac8ad571198237f173.slice - libcontainer container kubepods-burstable-podcf959182df800fac8ad571198237f173.slice. Dec 12 18:42:28.716497 kubelet[2777]: E1212 18:42:28.716477 2777 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-7f4437f45c\" not found" node="ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.719685 systemd[1]: Created slice kubepods-burstable-pod595f2956c2173931fec96946556991e3.slice - libcontainer container kubepods-burstable-pod595f2956c2173931fec96946556991e3.slice. 
Dec 12 18:42:28.721671 kubelet[2777]: E1212 18:42:28.721652 2777 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-7f4437f45c\" not found" node="ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.724064 systemd[1]: Created slice kubepods-burstable-pod07d972689a96bd08e4a6cf61afec5609.slice - libcontainer container kubepods-burstable-pod07d972689a96bd08e4a6cf61afec5609.slice. Dec 12 18:42:28.725604 kubelet[2777]: E1212 18:42:28.725583 2777 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-7f4437f45c\" not found" node="ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.834291 kubelet[2777]: I1212 18:42:28.834210 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.834875 kubelet[2777]: E1212 18:42:28.834817 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.12:6443/api/v1/nodes\": dial tcp 10.200.4.12:6443: connect: connection refused" node="ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.883187 kubelet[2777]: I1212 18:42:28.883140 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/595f2956c2173931fec96946556991e3-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-7f4437f45c\" (UID: \"595f2956c2173931fec96946556991e3\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.883187 kubelet[2777]: I1212 18:42:28.883192 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07d972689a96bd08e4a6cf61afec5609-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-7f4437f45c\" (UID: \"07d972689a96bd08e4a6cf61afec5609\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.883633 
kubelet[2777]: I1212 18:42:28.883243 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf959182df800fac8ad571198237f173-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-7f4437f45c\" (UID: \"cf959182df800fac8ad571198237f173\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.883633 kubelet[2777]: I1212 18:42:28.883275 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf959182df800fac8ad571198237f173-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-7f4437f45c\" (UID: \"cf959182df800fac8ad571198237f173\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.883633 kubelet[2777]: I1212 18:42:28.883297 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/595f2956c2173931fec96946556991e3-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-7f4437f45c\" (UID: \"595f2956c2173931fec96946556991e3\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.883633 kubelet[2777]: I1212 18:42:28.883317 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/595f2956c2173931fec96946556991e3-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-7f4437f45c\" (UID: \"595f2956c2173931fec96946556991e3\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.883633 kubelet[2777]: I1212 18:42:28.883336 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/595f2956c2173931fec96946556991e3-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4459.2.2-a-7f4437f45c\" (UID: \"595f2956c2173931fec96946556991e3\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.883738 kubelet[2777]: I1212 18:42:28.883355 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf959182df800fac8ad571198237f173-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-7f4437f45c\" (UID: \"cf959182df800fac8ad571198237f173\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.883738 kubelet[2777]: I1212 18:42:28.883375 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/595f2956c2173931fec96946556991e3-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-7f4437f45c\" (UID: \"595f2956c2173931fec96946556991e3\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:28.979948 kubelet[2777]: E1212 18:42:28.979910 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-7f4437f45c?timeout=10s\": dial tcp 10.200.4.12:6443: connect: connection refused" interval="800ms" Dec 12 18:42:29.017429 containerd[1725]: time="2025-12-12T18:42:29.017376992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-7f4437f45c,Uid:cf959182df800fac8ad571198237f173,Namespace:kube-system,Attempt:0,}" Dec 12 18:42:29.022622 containerd[1725]: time="2025-12-12T18:42:29.022594318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-7f4437f45c,Uid:595f2956c2173931fec96946556991e3,Namespace:kube-system,Attempt:0,}" Dec 12 18:42:29.026349 containerd[1725]: time="2025-12-12T18:42:29.026318821Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-7f4437f45c,Uid:07d972689a96bd08e4a6cf61afec5609,Namespace:kube-system,Attempt:0,}" Dec 12 18:42:29.105852 containerd[1725]: time="2025-12-12T18:42:29.105439224Z" level=info msg="connecting to shim 918a65ce48e668dd4eec31807a0ecf1044749c10f6399b72e6bce729b8e4c30b" address="unix:///run/containerd/s/2b5df707c53bd7ab5811500f794f3a50e1c7815c7c8257b212c68b3c1ea8e0bf" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:42:29.139377 containerd[1725]: time="2025-12-12T18:42:29.139344517Z" level=info msg="connecting to shim 24ced2b7f2db214ad2d599cb3d9069dde96e5aa8a86c3b1fbd07574ab1439f86" address="unix:///run/containerd/s/bff02a7c9bacafddc3fe73a32060eae01223e84be758e596339e6f7a0c2c4a81" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:42:29.146228 systemd[1]: Started cri-containerd-918a65ce48e668dd4eec31807a0ecf1044749c10f6399b72e6bce729b8e4c30b.scope - libcontainer container 918a65ce48e668dd4eec31807a0ecf1044749c10f6399b72e6bce729b8e4c30b. Dec 12 18:42:29.150794 containerd[1725]: time="2025-12-12T18:42:29.150768686Z" level=info msg="connecting to shim e629e7dd25ca1fa298d3a446ae236bb92c4a64de39c699b10be1f4315835a093" address="unix:///run/containerd/s/eb8004db64b8c1d18d9c26b273be30a74641e05868e3ebe38c01ce0b44e84f27" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:42:29.197044 systemd[1]: Started cri-containerd-24ced2b7f2db214ad2d599cb3d9069dde96e5aa8a86c3b1fbd07574ab1439f86.scope - libcontainer container 24ced2b7f2db214ad2d599cb3d9069dde96e5aa8a86c3b1fbd07574ab1439f86. Dec 12 18:42:29.201066 systemd[1]: Started cri-containerd-e629e7dd25ca1fa298d3a446ae236bb92c4a64de39c699b10be1f4315835a093.scope - libcontainer container e629e7dd25ca1fa298d3a446ae236bb92c4a64de39c699b10be1f4315835a093. 
Dec 12 18:42:29.220897 containerd[1725]: time="2025-12-12T18:42:29.220869227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-7f4437f45c,Uid:cf959182df800fac8ad571198237f173,Namespace:kube-system,Attempt:0,} returns sandbox id \"918a65ce48e668dd4eec31807a0ecf1044749c10f6399b72e6bce729b8e4c30b\"" Dec 12 18:42:29.228385 containerd[1725]: time="2025-12-12T18:42:29.228360534Z" level=info msg="CreateContainer within sandbox \"918a65ce48e668dd4eec31807a0ecf1044749c10f6399b72e6bce729b8e4c30b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:42:29.238924 kubelet[2777]: I1212 18:42:29.238898 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:29.240745 kubelet[2777]: E1212 18:42:29.240716 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.12:6443/api/v1/nodes\": dial tcp 10.200.4.12:6443: connect: connection refused" node="ci-4459.2.2-a-7f4437f45c" Dec 12 18:42:29.253531 containerd[1725]: time="2025-12-12T18:42:29.252596178Z" level=info msg="Container 07ada710d57a32d9a6ee18e98fbb7ed3b9da365fc4a22e05d738c7b5a10c2d3f: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:42:29.274940 containerd[1725]: time="2025-12-12T18:42:29.274907080Z" level=info msg="CreateContainer within sandbox \"918a65ce48e668dd4eec31807a0ecf1044749c10f6399b72e6bce729b8e4c30b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"07ada710d57a32d9a6ee18e98fbb7ed3b9da365fc4a22e05d738c7b5a10c2d3f\"" Dec 12 18:42:29.276944 containerd[1725]: time="2025-12-12T18:42:29.276920719Z" level=info msg="StartContainer for \"07ada710d57a32d9a6ee18e98fbb7ed3b9da365fc4a22e05d738c7b5a10c2d3f\"" Dec 12 18:42:29.278104 containerd[1725]: time="2025-12-12T18:42:29.278068499Z" level=info msg="connecting to shim 07ada710d57a32d9a6ee18e98fbb7ed3b9da365fc4a22e05d738c7b5a10c2d3f" 
address="unix:///run/containerd/s/2b5df707c53bd7ab5811500f794f3a50e1c7815c7c8257b212c68b3c1ea8e0bf" protocol=ttrpc version=3 Dec 12 18:42:29.281642 containerd[1725]: time="2025-12-12T18:42:29.281606814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-7f4437f45c,Uid:07d972689a96bd08e4a6cf61afec5609,Namespace:kube-system,Attempt:0,} returns sandbox id \"e629e7dd25ca1fa298d3a446ae236bb92c4a64de39c699b10be1f4315835a093\"" Dec 12 18:42:29.284735 containerd[1725]: time="2025-12-12T18:42:29.284704432Z" level=info msg="CreateContainer within sandbox \"e629e7dd25ca1fa298d3a446ae236bb92c4a64de39c699b10be1f4315835a093\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:42:29.288092 containerd[1725]: time="2025-12-12T18:42:29.288061493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-7f4437f45c,Uid:595f2956c2173931fec96946556991e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"24ced2b7f2db214ad2d599cb3d9069dde96e5aa8a86c3b1fbd07574ab1439f86\"" Dec 12 18:42:29.294854 containerd[1725]: time="2025-12-12T18:42:29.294376896Z" level=info msg="CreateContainer within sandbox \"24ced2b7f2db214ad2d599cb3d9069dde96e5aa8a86c3b1fbd07574ab1439f86\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:42:29.306967 systemd[1]: Started cri-containerd-07ada710d57a32d9a6ee18e98fbb7ed3b9da365fc4a22e05d738c7b5a10c2d3f.scope - libcontainer container 07ada710d57a32d9a6ee18e98fbb7ed3b9da365fc4a22e05d738c7b5a10c2d3f. 
Dec 12 18:42:29.318737 containerd[1725]: time="2025-12-12T18:42:29.318701620Z" level=info msg="Container 7a38f5a3566afbb89c4a8037e0083eac32bfcae4fad9cf5440af4955b5d904a9: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:42:29.329430 containerd[1725]: time="2025-12-12T18:42:29.329401514Z" level=info msg="Container c4e99c3de6dc0ad12ceac703b9c842bf26ff0398c5a9d8be0a8c08af77263f3d: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:42:29.343861 containerd[1725]: time="2025-12-12T18:42:29.343214986Z" level=info msg="CreateContainer within sandbox \"e629e7dd25ca1fa298d3a446ae236bb92c4a64de39c699b10be1f4315835a093\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7a38f5a3566afbb89c4a8037e0083eac32bfcae4fad9cf5440af4955b5d904a9\"" Dec 12 18:42:29.344991 containerd[1725]: time="2025-12-12T18:42:29.344961188Z" level=info msg="StartContainer for \"7a38f5a3566afbb89c4a8037e0083eac32bfcae4fad9cf5440af4955b5d904a9\"" Dec 12 18:42:29.346239 containerd[1725]: time="2025-12-12T18:42:29.346195624Z" level=info msg="connecting to shim 7a38f5a3566afbb89c4a8037e0083eac32bfcae4fad9cf5440af4955b5d904a9" address="unix:///run/containerd/s/eb8004db64b8c1d18d9c26b273be30a74641e05868e3ebe38c01ce0b44e84f27" protocol=ttrpc version=3 Dec 12 18:42:29.362554 containerd[1725]: time="2025-12-12T18:42:29.361796223Z" level=info msg="CreateContainer within sandbox \"24ced2b7f2db214ad2d599cb3d9069dde96e5aa8a86c3b1fbd07574ab1439f86\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c4e99c3de6dc0ad12ceac703b9c842bf26ff0398c5a9d8be0a8c08af77263f3d\"" Dec 12 18:42:29.363195 containerd[1725]: time="2025-12-12T18:42:29.362955817Z" level=info msg="StartContainer for \"07ada710d57a32d9a6ee18e98fbb7ed3b9da365fc4a22e05d738c7b5a10c2d3f\" returns successfully" Dec 12 18:42:29.365955 containerd[1725]: time="2025-12-12T18:42:29.365921353Z" level=info msg="StartContainer for \"c4e99c3de6dc0ad12ceac703b9c842bf26ff0398c5a9d8be0a8c08af77263f3d\"" 
Dec 12 18:42:29.367657 containerd[1725]: time="2025-12-12T18:42:29.367602319Z" level=info msg="connecting to shim c4e99c3de6dc0ad12ceac703b9c842bf26ff0398c5a9d8be0a8c08af77263f3d" address="unix:///run/containerd/s/bff02a7c9bacafddc3fe73a32060eae01223e84be758e596339e6f7a0c2c4a81" protocol=ttrpc version=3
Dec 12 18:42:29.375996 systemd[1]: Started cri-containerd-7a38f5a3566afbb89c4a8037e0083eac32bfcae4fad9cf5440af4955b5d904a9.scope - libcontainer container 7a38f5a3566afbb89c4a8037e0083eac32bfcae4fad9cf5440af4955b5d904a9.
Dec 12 18:42:29.393151 systemd[1]: Started cri-containerd-c4e99c3de6dc0ad12ceac703b9c842bf26ff0398c5a9d8be0a8c08af77263f3d.scope - libcontainer container c4e99c3de6dc0ad12ceac703b9c842bf26ff0398c5a9d8be0a8c08af77263f3d.
Dec 12 18:42:29.409371 kubelet[2777]: W1212 18:42:29.408815 2777 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused
Dec 12 18:42:29.410139 kubelet[2777]: E1212 18:42:29.409921 2777 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.12:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:42:29.418504 kubelet[2777]: W1212 18:42:29.418379 2777 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused
Dec 12 18:42:29.418504 kubelet[2777]: E1212 18:42:29.418457 2777 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.12:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:42:29.421996 kubelet[2777]: E1212 18:42:29.421978 2777 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-7f4437f45c\" not found" node="ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:29.489491 containerd[1725]: time="2025-12-12T18:42:29.489376308Z" level=info msg="StartContainer for \"7a38f5a3566afbb89c4a8037e0083eac32bfcae4fad9cf5440af4955b5d904a9\" returns successfully"
Dec 12 18:42:29.495464 containerd[1725]: time="2025-12-12T18:42:29.495427878Z" level=info msg="StartContainer for \"c4e99c3de6dc0ad12ceac703b9c842bf26ff0398c5a9d8be0a8c08af77263f3d\" returns successfully"
Dec 12 18:42:30.043403 kubelet[2777]: I1212 18:42:30.043251 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:30.428476 kubelet[2777]: E1212 18:42:30.428446 2777 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-7f4437f45c\" not found" node="ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:30.432299 kubelet[2777]: E1212 18:42:30.432141 2777 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-7f4437f45c\" not found" node="ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:30.432757 kubelet[2777]: E1212 18:42:30.432614 2777 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-7f4437f45c\" not found" node="ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:31.264991 kubelet[2777]: E1212 18:42:31.264928 2777 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-a-7f4437f45c\" not found" node="ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:31.365004 kubelet[2777]: I1212 18:42:31.364962 2777 apiserver.go:52] "Watching apiserver"
Dec 12 18:42:31.382508 kubelet[2777]: I1212 18:42:31.382471 2777 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 12 18:42:31.410126 kubelet[2777]: I1212 18:42:31.410058 2777 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:31.434863 kubelet[2777]: I1212 18:42:31.432474 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:31.434863 kubelet[2777]: I1212 18:42:31.432860 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:31.434863 kubelet[2777]: I1212 18:42:31.433124 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:31.448042 kubelet[2777]: E1212 18:42:31.448010 2777 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-7f4437f45c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:31.448218 kubelet[2777]: E1212 18:42:31.448205 2777 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-7f4437f45c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:31.448644 kubelet[2777]: E1212 18:42:31.448619 2777 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-7f4437f45c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:31.476992 kubelet[2777]: I1212 18:42:31.476964 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:31.478406 kubelet[2777]: E1212 18:42:31.478375 2777 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-7f4437f45c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:31.478406 kubelet[2777]: I1212 18:42:31.478404 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:31.479969 kubelet[2777]: E1212 18:42:31.479938 2777 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-7f4437f45c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:31.479969 kubelet[2777]: I1212 18:42:31.479964 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:31.481182 kubelet[2777]: E1212 18:42:31.481148 2777 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-7f4437f45c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:32.434846 kubelet[2777]: I1212 18:42:32.434752 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:32.435657 kubelet[2777]: I1212 18:42:32.434815 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:32.443795 kubelet[2777]: W1212 18:42:32.443754 2777 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 12 18:42:32.444318 kubelet[2777]: W1212 18:42:32.444288 2777 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 12 18:42:33.266070 systemd[1]: Reload requested from client PID 3043 ('systemctl') (unit session-9.scope)...
Dec 12 18:42:33.266087 systemd[1]: Reloading...
Dec 12 18:42:33.361858 zram_generator::config[3086]: No configuration found.
Dec 12 18:42:33.569130 systemd[1]: Reloading finished in 302 ms.
Dec 12 18:42:33.607123 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:42:33.624658 systemd[1]: kubelet.service: Deactivated successfully.
Dec 12 18:42:33.624933 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:42:33.624984 systemd[1]: kubelet.service: Consumed 886ms CPU time, 129M memory peak.
Dec 12 18:42:33.626560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:42:34.198161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:42:34.202384 (kubelet)[3157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 18:42:34.257194 kubelet[3157]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:42:34.257194 kubelet[3157]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 18:42:34.257194 kubelet[3157]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:42:34.257714 kubelet[3157]: I1212 18:42:34.257616 3157 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 18:42:34.265664 kubelet[3157]: I1212 18:42:34.265642 3157 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 12 18:42:34.266847 kubelet[3157]: I1212 18:42:34.265751 3157 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 18:42:34.266847 kubelet[3157]: I1212 18:42:34.265985 3157 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 12 18:42:34.267622 kubelet[3157]: I1212 18:42:34.267601 3157 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 12 18:42:34.270038 kubelet[3157]: I1212 18:42:34.269692 3157 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 18:42:34.274712 kubelet[3157]: I1212 18:42:34.274691 3157 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 18:42:34.277137 kubelet[3157]: I1212 18:42:34.277118 3157 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 12 18:42:34.277364 kubelet[3157]: I1212 18:42:34.277347 3157 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 18:42:34.277681 kubelet[3157]: I1212 18:42:34.277395 3157 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-7f4437f45c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 18:42:34.277796 kubelet[3157]: I1212 18:42:34.277790 3157 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 18:42:34.277849 kubelet[3157]: I1212 18:42:34.277822 3157 container_manager_linux.go:304] "Creating device plugin manager"
Dec 12 18:42:34.277926 kubelet[3157]: I1212 18:42:34.277922 3157 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:42:34.279021 kubelet[3157]: I1212 18:42:34.279006 3157 kubelet.go:446] "Attempting to sync node with API server"
Dec 12 18:42:34.279154 kubelet[3157]: I1212 18:42:34.279145 3157 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 18:42:34.279215 kubelet[3157]: I1212 18:42:34.279210 3157 kubelet.go:352] "Adding apiserver pod source"
Dec 12 18:42:34.279258 kubelet[3157]: I1212 18:42:34.279252 3157 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 18:42:34.286928 kubelet[3157]: I1212 18:42:34.286905 3157 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 18:42:34.287455 kubelet[3157]: I1212 18:42:34.287438 3157 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 12 18:42:34.288015 kubelet[3157]: I1212 18:42:34.287998 3157 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 12 18:42:34.288126 kubelet[3157]: I1212 18:42:34.288118 3157 server.go:1287] "Started kubelet"
Dec 12 18:42:34.290351 kubelet[3157]: I1212 18:42:34.290329 3157 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 18:42:34.298481 kubelet[3157]: I1212 18:42:34.298453 3157 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 18:42:34.299918 kubelet[3157]: I1212 18:42:34.299452 3157 server.go:479] "Adding debug handlers to kubelet server"
Dec 12 18:42:34.300507 kubelet[3157]: I1212 18:42:34.300466 3157 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 18:42:34.300713 kubelet[3157]: I1212 18:42:34.300704 3157 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 18:42:34.300950 kubelet[3157]: I1212 18:42:34.300942 3157 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 12 18:42:34.302266 kubelet[3157]: I1212 18:42:34.302258 3157 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 12 18:42:34.302483 kubelet[3157]: E1212 18:42:34.302474 3157 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-7f4437f45c\" not found"
Dec 12 18:42:34.304202 kubelet[3157]: I1212 18:42:34.304185 3157 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 12 18:42:34.304413 kubelet[3157]: I1212 18:42:34.304404 3157 reconciler.go:26] "Reconciler: start to sync state"
Dec 12 18:42:34.306438 kubelet[3157]: I1212 18:42:34.306405 3157 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 12 18:42:34.307685 kubelet[3157]: I1212 18:42:34.307664 3157 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 12 18:42:34.307888 kubelet[3157]: I1212 18:42:34.307781 3157 status_manager.go:227] "Starting to sync pod status with apiserver"
Dec 12 18:42:34.307888 kubelet[3157]: I1212 18:42:34.307802 3157 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 18:42:34.307888 kubelet[3157]: I1212 18:42:34.307810 3157 kubelet.go:2382] "Starting kubelet main sync loop"
Dec 12 18:42:34.308045 kubelet[3157]: E1212 18:42:34.308028 3157 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 18:42:34.321126 kubelet[3157]: I1212 18:42:34.321104 3157 factory.go:221] Registration of the systemd container factory successfully
Dec 12 18:42:34.321204 kubelet[3157]: I1212 18:42:34.321189 3157 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 12 18:42:34.322292 sudo[3182]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 12 18:42:34.322923 sudo[3182]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 12 18:42:34.325887 kubelet[3157]: I1212 18:42:34.325177 3157 factory.go:221] Registration of the containerd container factory successfully
Dec 12 18:42:34.326496 kubelet[3157]: E1212 18:42:34.326282 3157 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 12 18:42:34.387536 kubelet[3157]: I1212 18:42:34.387516 3157 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 12 18:42:34.387536 kubelet[3157]: I1212 18:42:34.387533 3157 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 12 18:42:34.387641 kubelet[3157]: I1212 18:42:34.387548 3157 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:42:34.387981 kubelet[3157]: I1212 18:42:34.387678 3157 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 12 18:42:34.387981 kubelet[3157]: I1212 18:42:34.387688 3157 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 12 18:42:34.387981 kubelet[3157]: I1212 18:42:34.387704 3157 policy_none.go:49] "None policy: Start"
Dec 12 18:42:34.387981 kubelet[3157]: I1212 18:42:34.387713 3157 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 12 18:42:34.387981 kubelet[3157]: I1212 18:42:34.387721 3157 state_mem.go:35] "Initializing new in-memory state store"
Dec 12 18:42:34.387981 kubelet[3157]: I1212 18:42:34.387956 3157 state_mem.go:75] "Updated machine memory state"
Dec 12 18:42:34.392657 kubelet[3157]: I1212 18:42:34.392201 3157 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 12 18:42:34.392657 kubelet[3157]: I1212 18:42:34.392333 3157 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 12 18:42:34.392657 kubelet[3157]: I1212 18:42:34.392342 3157 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 12 18:42:34.393538 kubelet[3157]: I1212 18:42:34.393520 3157 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 12 18:42:34.398590 kubelet[3157]: E1212 18:42:34.395937 3157 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 12 18:42:34.409812 kubelet[3157]: I1212 18:42:34.409788 3157 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.411124 kubelet[3157]: I1212 18:42:34.411098 3157 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.411672 kubelet[3157]: I1212 18:42:34.411654 3157 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.423103 kubelet[3157]: W1212 18:42:34.422674 3157 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 12 18:42:34.429202 kubelet[3157]: W1212 18:42:34.429177 3157 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 12 18:42:34.429355 kubelet[3157]: E1212 18:42:34.429304 3157 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-7f4437f45c\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.435478 kubelet[3157]: W1212 18:42:34.435458 3157 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 12 18:42:34.435576 kubelet[3157]: E1212 18:42:34.435508 3157 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-7f4437f45c\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.604347 kubelet[3157]: I1212 18:42:34.494590 3157 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.604347 kubelet[3157]: I1212 18:42:34.506855 3157 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.604347 kubelet[3157]: I1212 18:42:34.604327 3157 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.606252 kubelet[3157]: I1212 18:42:34.605953 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf959182df800fac8ad571198237f173-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-7f4437f45c\" (UID: \"cf959182df800fac8ad571198237f173\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.606252 kubelet[3157]: I1212 18:42:34.605985 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/595f2956c2173931fec96946556991e3-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-7f4437f45c\" (UID: \"595f2956c2173931fec96946556991e3\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.606252 kubelet[3157]: I1212 18:42:34.606006 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/595f2956c2173931fec96946556991e3-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-7f4437f45c\" (UID: \"595f2956c2173931fec96946556991e3\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.606252 kubelet[3157]: I1212 18:42:34.606022 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/595f2956c2173931fec96946556991e3-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-7f4437f45c\" (UID: \"595f2956c2173931fec96946556991e3\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.606252 kubelet[3157]: I1212 18:42:34.606042 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/595f2956c2173931fec96946556991e3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-7f4437f45c\" (UID: \"595f2956c2173931fec96946556991e3\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.606440 kubelet[3157]: I1212 18:42:34.606061 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07d972689a96bd08e4a6cf61afec5609-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-7f4437f45c\" (UID: \"07d972689a96bd08e4a6cf61afec5609\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.606440 kubelet[3157]: I1212 18:42:34.606084 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf959182df800fac8ad571198237f173-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-7f4437f45c\" (UID: \"cf959182df800fac8ad571198237f173\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.606440 kubelet[3157]: I1212 18:42:34.606103 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf959182df800fac8ad571198237f173-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-7f4437f45c\" (UID: \"cf959182df800fac8ad571198237f173\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.606440 kubelet[3157]: I1212 18:42:34.606121 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/595f2956c2173931fec96946556991e3-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-7f4437f45c\" (UID: \"595f2956c2173931fec96946556991e3\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:34.845232 sudo[3182]: pam_unix(sudo:session): session closed for user root
Dec 12 18:42:35.285616 kubelet[3157]: I1212 18:42:35.285403 3157 apiserver.go:52] "Watching apiserver"
Dec 12 18:42:35.304710 kubelet[3157]: I1212 18:42:35.304662 3157 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 12 18:42:35.370655 kubelet[3157]: I1212 18:42:35.370596 3157 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:35.371001 kubelet[3157]: I1212 18:42:35.370977 3157 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:35.393213 kubelet[3157]: W1212 18:42:35.393184 3157 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 12 18:42:35.393304 kubelet[3157]: E1212 18:42:35.393244 3157 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-7f4437f45c\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:35.404374 kubelet[3157]: W1212 18:42:35.404350 3157 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 12 18:42:35.404606 kubelet[3157]: E1212 18:42:35.404588 3157 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-7f4437f45c\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f4437f45c"
Dec 12 18:42:35.422136 kubelet[3157]: I1212 18:42:35.422062 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-a-7f4437f45c" podStartSLOduration=1.422046345 podStartE2EDuration="1.422046345s" podCreationTimestamp="2025-12-12 18:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:42:35.422032294 +0000 UTC m=+1.214417949" watchObservedRunningTime="2025-12-12 18:42:35.422046345 +0000 UTC m=+1.214432006"
Dec 12 18:42:35.422283 kubelet[3157]: I1212 18:42:35.422181 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-a-7f4437f45c" podStartSLOduration=3.4221744960000002 podStartE2EDuration="3.422174496s" podCreationTimestamp="2025-12-12 18:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:42:35.404967846 +0000 UTC m=+1.197353505" watchObservedRunningTime="2025-12-12 18:42:35.422174496 +0000 UTC m=+1.214560139"
Dec 12 18:42:35.443810 kubelet[3157]: I1212 18:42:35.443562 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-7f4437f45c" podStartSLOduration=3.44354657 podStartE2EDuration="3.44354657s" podCreationTimestamp="2025-12-12 18:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:42:35.434265363 +0000 UTC m=+1.226651009" watchObservedRunningTime="2025-12-12 18:42:35.44354657 +0000 UTC m=+1.235932228"
Dec 12 18:42:36.260281 sudo[2145]: pam_unix(sudo:session): session closed for user root
Dec 12 18:42:36.363258 sshd[2144]: Connection closed by 10.200.16.10 port 52870
Dec 12 18:42:36.363756 sshd-session[2141]: pam_unix(sshd:session): session closed for user core
Dec 12 18:42:36.367495 systemd[1]: sshd@6-10.200.4.12:22-10.200.16.10:52870.service: Deactivated successfully.
Dec 12 18:42:36.369559 systemd[1]: session-9.scope: Deactivated successfully.
Dec 12 18:42:36.369745 systemd[1]: session-9.scope: Consumed 3.642s CPU time, 269.3M memory peak.
Dec 12 18:42:36.372633 systemd-logind[1691]: Session 9 logged out. Waiting for processes to exit.
Dec 12 18:42:36.373783 systemd-logind[1691]: Removed session 9.
Dec 12 18:42:39.538316 kubelet[3157]: I1212 18:42:39.538254 3157 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 12 18:42:39.539106 containerd[1725]: time="2025-12-12T18:42:39.539059152Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 12 18:42:39.539974 kubelet[3157]: I1212 18:42:39.539622 3157 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 12 18:42:40.591816 kubelet[3157]: I1212 18:42:40.591760 3157 status_manager.go:890] "Failed to get status for pod" podUID="cd8b1de7-5dbf-4325-9af4-dbf747def000" pod="kube-system/kube-proxy-qt6kt" err="pods \"kube-proxy-qt6kt\" is forbidden: User \"system:node:ci-4459.2.2-a-7f4437f45c\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.2-a-7f4437f45c' and this object"
Dec 12 18:42:40.591864 systemd[1]: Created slice kubepods-besteffort-podcd8b1de7_5dbf_4325_9af4_dbf747def000.slice - libcontainer container kubepods-besteffort-podcd8b1de7_5dbf_4325_9af4_dbf747def000.slice.
Dec 12 18:42:40.608779 systemd[1]: Created slice kubepods-burstable-podb27074df_f036_4c0a_8771_c75dc95cf26a.slice - libcontainer container kubepods-burstable-podb27074df_f036_4c0a_8771_c75dc95cf26a.slice.
Dec 12 18:42:40.635383 systemd[1]: Created slice kubepods-besteffort-podf5c0af83_6dc7_4107_8a9e_0a3cda938f36.slice - libcontainer container kubepods-besteffort-podf5c0af83_6dc7_4107_8a9e_0a3cda938f36.slice.
Dec 12 18:42:40.645991 kubelet[3157]: I1212 18:42:40.645788 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc77q\" (UniqueName: \"kubernetes.io/projected/f5c0af83-6dc7-4107-8a9e-0a3cda938f36-kube-api-access-zc77q\") pod \"cilium-operator-6c4d7847fc-pllbw\" (UID: \"f5c0af83-6dc7-4107-8a9e-0a3cda938f36\") " pod="kube-system/cilium-operator-6c4d7847fc-pllbw"
Dec 12 18:42:40.646349 kubelet[3157]: I1212 18:42:40.646280 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-hostproc\") pod \"cilium-5mjdz\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " pod="kube-system/cilium-5mjdz"
Dec 12 18:42:40.646349 kubelet[3157]: I1212 18:42:40.646325 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-cilium-cgroup\") pod \"cilium-5mjdz\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " pod="kube-system/cilium-5mjdz"
Dec 12 18:42:40.646607 kubelet[3157]: I1212 18:42:40.646537 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-host-proc-sys-net\") pod \"cilium-5mjdz\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " pod="kube-system/cilium-5mjdz"
Dec 12 18:42:40.646607 kubelet[3157]: I1212 18:42:40.646562 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd8b1de7-5dbf-4325-9af4-dbf747def000-xtables-lock\") pod \"kube-proxy-qt6kt\" (UID: \"cd8b1de7-5dbf-4325-9af4-dbf747def000\") " pod="kube-system/kube-proxy-qt6kt"
Dec 12 18:42:40.646607 kubelet[3157]: I1212 18:42:40.646580 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd8b1de7-5dbf-4325-9af4-dbf747def000-lib-modules\") pod \"kube-proxy-qt6kt\" (UID: \"cd8b1de7-5dbf-4325-9af4-dbf747def000\") " pod="kube-system/kube-proxy-qt6kt"
Dec 12 18:42:40.646845 kubelet[3157]: I1212 18:42:40.646781 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2qwp\" (UniqueName: \"kubernetes.io/projected/cd8b1de7-5dbf-4325-9af4-dbf747def000-kube-api-access-p2qwp\") pod \"kube-proxy-qt6kt\" (UID: \"cd8b1de7-5dbf-4325-9af4-dbf747def000\") " pod="kube-system/kube-proxy-qt6kt"
Dec 12 18:42:40.646845 kubelet[3157]: I1212 18:42:40.646801 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-lib-modules\") pod \"cilium-5mjdz\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " pod="kube-system/cilium-5mjdz"
Dec 12 18:42:40.647060 kubelet[3157]: I1212 18:42:40.646989 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b27074df-f036-4c0a-8771-c75dc95cf26a-clustermesh-secrets\") pod \"cilium-5mjdz\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " pod="kube-system/cilium-5mjdz"
Dec 12 18:42:40.647060 kubelet[3157]: I1212 18:42:40.647013 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cd8b1de7-5dbf-4325-9af4-dbf747def000-kube-proxy\") pod \"kube-proxy-qt6kt\" (UID: \"cd8b1de7-5dbf-4325-9af4-dbf747def000\") " pod="kube-system/kube-proxy-qt6kt"
Dec 12 18:42:40.647060 kubelet[3157]: I1212 18:42:40.647039 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-etc-cni-netd\") pod \"cilium-5mjdz\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " pod="kube-system/cilium-5mjdz"
Dec 12 18:42:40.647310 kubelet[3157]: I1212 18:42:40.647241 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b27074df-f036-4c0a-8771-c75dc95cf26a-hubble-tls\") pod \"cilium-5mjdz\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " pod="kube-system/cilium-5mjdz"
Dec 12 18:42:40.647310 kubelet[3157]: I1212 18:42:40.647268 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5c0af83-6dc7-4107-8a9e-0a3cda938f36-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pllbw\" (UID: \"f5c0af83-6dc7-4107-8a9e-0a3cda938f36\") " pod="kube-system/cilium-operator-6c4d7847fc-pllbw"
Dec 12 18:42:40.647310 kubelet[3157]: I1212 18:42:40.647289 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x57k\" (UniqueName: \"kubernetes.io/projected/b27074df-f036-4c0a-8771-c75dc95cf26a-kube-api-access-7x57k\") pod \"cilium-5mjdz\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " pod="kube-system/cilium-5mjdz"
Dec 12 18:42:40.647506 kubelet[3157]: I1212 18:42:40.647496 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-host-proc-sys-kernel\") pod \"cilium-5mjdz\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " pod="kube-system/cilium-5mjdz"
Dec 12 18:42:40.647573 kubelet[3157]: I1212 18:42:40.647566 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName:
\"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-cilium-run\") pod \"cilium-5mjdz\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " pod="kube-system/cilium-5mjdz" Dec 12 18:42:40.647652 kubelet[3157]: I1212 18:42:40.647644 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-cni-path\") pod \"cilium-5mjdz\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " pod="kube-system/cilium-5mjdz" Dec 12 18:42:40.647841 kubelet[3157]: I1212 18:42:40.647796 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b27074df-f036-4c0a-8771-c75dc95cf26a-cilium-config-path\") pod \"cilium-5mjdz\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " pod="kube-system/cilium-5mjdz" Dec 12 18:42:40.647841 kubelet[3157]: I1212 18:42:40.647815 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-bpf-maps\") pod \"cilium-5mjdz\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " pod="kube-system/cilium-5mjdz" Dec 12 18:42:40.648061 kubelet[3157]: I1212 18:42:40.647985 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-xtables-lock\") pod \"cilium-5mjdz\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " pod="kube-system/cilium-5mjdz" Dec 12 18:42:40.901806 containerd[1725]: time="2025-12-12T18:42:40.901703249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qt6kt,Uid:cd8b1de7-5dbf-4325-9af4-dbf747def000,Namespace:kube-system,Attempt:0,}" Dec 12 18:42:40.915926 containerd[1725]: time="2025-12-12T18:42:40.915892481Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:cilium-5mjdz,Uid:b27074df-f036-4c0a-8771-c75dc95cf26a,Namespace:kube-system,Attempt:0,}" Dec 12 18:42:40.940805 containerd[1725]: time="2025-12-12T18:42:40.940772335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pllbw,Uid:f5c0af83-6dc7-4107-8a9e-0a3cda938f36,Namespace:kube-system,Attempt:0,}" Dec 12 18:42:40.965018 containerd[1725]: time="2025-12-12T18:42:40.964950686Z" level=info msg="connecting to shim 80f12e7acbbb8e694f0b25d424056a91cf562e88573df0c762d205d0a4e85f94" address="unix:///run/containerd/s/5085b254c2e15cf55610545b85c579869e6d1f37385c9da44a66b6352b78aef0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:42:40.990061 systemd[1]: Started cri-containerd-80f12e7acbbb8e694f0b25d424056a91cf562e88573df0c762d205d0a4e85f94.scope - libcontainer container 80f12e7acbbb8e694f0b25d424056a91cf562e88573df0c762d205d0a4e85f94. Dec 12 18:42:41.017452 containerd[1725]: time="2025-12-12T18:42:41.017037475Z" level=info msg="connecting to shim f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6" address="unix:///run/containerd/s/0ff95394cbce5feb319b7b29b0aadf3b42aa92837828484bea03670572e1a68d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:42:41.023683 containerd[1725]: time="2025-12-12T18:42:41.023644398Z" level=info msg="connecting to shim 1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e" address="unix:///run/containerd/s/7290dce90f5fc611fa45421b29b0ce8ef4e7dcc1e0cc40310ce0f576d0eec6f1" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:42:41.027006 containerd[1725]: time="2025-12-12T18:42:41.026757750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qt6kt,Uid:cd8b1de7-5dbf-4325-9af4-dbf747def000,Namespace:kube-system,Attempt:0,} returns sandbox id \"80f12e7acbbb8e694f0b25d424056a91cf562e88573df0c762d205d0a4e85f94\"" Dec 12 18:42:41.033270 containerd[1725]: time="2025-12-12T18:42:41.033229709Z" level=info msg="CreateContainer within sandbox 
\"80f12e7acbbb8e694f0b25d424056a91cf562e88573df0c762d205d0a4e85f94\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:42:41.056990 systemd[1]: Started cri-containerd-f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6.scope - libcontainer container f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6. Dec 12 18:42:41.061124 systemd[1]: Started cri-containerd-1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e.scope - libcontainer container 1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e. Dec 12 18:42:41.064959 containerd[1725]: time="2025-12-12T18:42:41.064928359Z" level=info msg="Container 6ecb09e3a30925dbe73071462ebeb110d9a74af2147e2baf637a36d4d83af56e: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:42:41.097001 containerd[1725]: time="2025-12-12T18:42:41.096955346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5mjdz,Uid:b27074df-f036-4c0a-8771-c75dc95cf26a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\"" Dec 12 18:42:41.097154 containerd[1725]: time="2025-12-12T18:42:41.096988963Z" level=info msg="CreateContainer within sandbox \"80f12e7acbbb8e694f0b25d424056a91cf562e88573df0c762d205d0a4e85f94\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6ecb09e3a30925dbe73071462ebeb110d9a74af2147e2baf637a36d4d83af56e\"" Dec 12 18:42:41.097779 containerd[1725]: time="2025-12-12T18:42:41.097753727Z" level=info msg="StartContainer for \"6ecb09e3a30925dbe73071462ebeb110d9a74af2147e2baf637a36d4d83af56e\"" Dec 12 18:42:41.101428 containerd[1725]: time="2025-12-12T18:42:41.101402870Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 12 18:42:41.102175 containerd[1725]: time="2025-12-12T18:42:41.102146317Z" level=info msg="connecting to shim 
6ecb09e3a30925dbe73071462ebeb110d9a74af2147e2baf637a36d4d83af56e" address="unix:///run/containerd/s/5085b254c2e15cf55610545b85c579869e6d1f37385c9da44a66b6352b78aef0" protocol=ttrpc version=3 Dec 12 18:42:41.129974 systemd[1]: Started cri-containerd-6ecb09e3a30925dbe73071462ebeb110d9a74af2147e2baf637a36d4d83af56e.scope - libcontainer container 6ecb09e3a30925dbe73071462ebeb110d9a74af2147e2baf637a36d4d83af56e. Dec 12 18:42:41.158018 containerd[1725]: time="2025-12-12T18:42:41.157924696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pllbw,Uid:f5c0af83-6dc7-4107-8a9e-0a3cda938f36,Namespace:kube-system,Attempt:0,} returns sandbox id \"1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e\"" Dec 12 18:42:41.187616 containerd[1725]: time="2025-12-12T18:42:41.187574028Z" level=info msg="StartContainer for \"6ecb09e3a30925dbe73071462ebeb110d9a74af2147e2baf637a36d4d83af56e\" returns successfully" Dec 12 18:42:42.433764 kubelet[3157]: I1212 18:42:42.433690 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qt6kt" podStartSLOduration=2.433669561 podStartE2EDuration="2.433669561s" podCreationTimestamp="2025-12-12 18:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:42:41.392337535 +0000 UTC m=+7.184723221" watchObservedRunningTime="2025-12-12 18:42:42.433669561 +0000 UTC m=+8.226055269" Dec 12 18:42:46.977052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2131774331.mount: Deactivated successfully. 
Dec 12 18:42:48.675421 containerd[1725]: time="2025-12-12T18:42:48.675372810Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:48.682349 containerd[1725]: time="2025-12-12T18:42:48.682318038Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Dec 12 18:42:48.686842 containerd[1725]: time="2025-12-12T18:42:48.685807305Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:48.687077 containerd[1725]: time="2025-12-12T18:42:48.687053441Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.585616327s" Dec 12 18:42:48.687141 containerd[1725]: time="2025-12-12T18:42:48.687129484Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 12 18:42:48.688501 containerd[1725]: time="2025-12-12T18:42:48.688461217Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 12 18:42:48.689732 containerd[1725]: time="2025-12-12T18:42:48.689704744Z" level=info msg="CreateContainer within sandbox \"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 12 18:42:48.715982 containerd[1725]: time="2025-12-12T18:42:48.715954682Z" level=info msg="Container 4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:42:48.734298 containerd[1725]: time="2025-12-12T18:42:48.734269824Z" level=info msg="CreateContainer within sandbox \"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8\"" Dec 12 18:42:48.734919 containerd[1725]: time="2025-12-12T18:42:48.734704633Z" level=info msg="StartContainer for \"4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8\"" Dec 12 18:42:48.735916 containerd[1725]: time="2025-12-12T18:42:48.735880049Z" level=info msg="connecting to shim 4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8" address="unix:///run/containerd/s/0ff95394cbce5feb319b7b29b0aadf3b42aa92837828484bea03670572e1a68d" protocol=ttrpc version=3 Dec 12 18:42:48.756991 systemd[1]: Started cri-containerd-4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8.scope - libcontainer container 4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8. 
Dec 12 18:42:48.790884 containerd[1725]: time="2025-12-12T18:42:48.790861278Z" level=info msg="StartContainer for \"4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8\" returns successfully" Dec 12 18:42:48.799667 containerd[1725]: time="2025-12-12T18:42:48.798908118Z" level=info msg="received container exit event container_id:\"4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8\" id:\"4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8\" pid:3573 exited_at:{seconds:1765564968 nanos:798547016}" Dec 12 18:42:48.799105 systemd[1]: cri-containerd-4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8.scope: Deactivated successfully. Dec 12 18:42:49.712406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8-rootfs.mount: Deactivated successfully. Dec 12 18:42:52.408053 containerd[1725]: time="2025-12-12T18:42:52.407998530Z" level=info msg="CreateContainer within sandbox \"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 12 18:42:52.443956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1149747803.mount: Deactivated successfully. 
Dec 12 18:42:52.486632 containerd[1725]: time="2025-12-12T18:42:52.486592769Z" level=info msg="Container 54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:42:52.520662 containerd[1725]: time="2025-12-12T18:42:52.520626551Z" level=info msg="CreateContainer within sandbox \"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe\"" Dec 12 18:42:52.521369 containerd[1725]: time="2025-12-12T18:42:52.521345435Z" level=info msg="StartContainer for \"54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe\"" Dec 12 18:42:52.523557 containerd[1725]: time="2025-12-12T18:42:52.523410352Z" level=info msg="connecting to shim 54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe" address="unix:///run/containerd/s/0ff95394cbce5feb319b7b29b0aadf3b42aa92837828484bea03670572e1a68d" protocol=ttrpc version=3 Dec 12 18:42:52.554138 systemd[1]: Started cri-containerd-54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe.scope - libcontainer container 54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe. Dec 12 18:42:52.585880 containerd[1725]: time="2025-12-12T18:42:52.585818260Z" level=info msg="StartContainer for \"54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe\" returns successfully" Dec 12 18:42:52.599146 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 18:42:52.599760 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:42:52.599936 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:42:52.602074 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:42:52.602304 systemd[1]: cri-containerd-54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe.scope: Deactivated successfully. 
Dec 12 18:42:52.605467 containerd[1725]: time="2025-12-12T18:42:52.605431366Z" level=info msg="received container exit event container_id:\"54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe\" id:\"54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe\" pid:3630 exited_at:{seconds:1765564972 nanos:603591902}" Dec 12 18:42:52.629040 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:42:53.075533 containerd[1725]: time="2025-12-12T18:42:53.075494236Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:53.083851 containerd[1725]: time="2025-12-12T18:42:53.082790332Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:53.083851 containerd[1725]: time="2025-12-12T18:42:53.082565004Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Dec 12 18:42:53.083851 containerd[1725]: time="2025-12-12T18:42:53.083745717Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.395002398s" Dec 12 18:42:53.083851 containerd[1725]: time="2025-12-12T18:42:53.083771527Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference 
\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 12 18:42:53.086275 containerd[1725]: time="2025-12-12T18:42:53.086239836Z" level=info msg="CreateContainer within sandbox \"1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 12 18:42:53.109105 containerd[1725]: time="2025-12-12T18:42:53.109072666Z" level=info msg="Container 40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:42:53.134650 containerd[1725]: time="2025-12-12T18:42:53.134613568Z" level=info msg="CreateContainer within sandbox \"1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\"" Dec 12 18:42:53.135325 containerd[1725]: time="2025-12-12T18:42:53.135289228Z" level=info msg="StartContainer for \"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\"" Dec 12 18:42:53.136475 containerd[1725]: time="2025-12-12T18:42:53.136426739Z" level=info msg="connecting to shim 40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d" address="unix:///run/containerd/s/7290dce90f5fc611fa45421b29b0ce8ef4e7dcc1e0cc40310ce0f576d0eec6f1" protocol=ttrpc version=3 Dec 12 18:42:53.155035 systemd[1]: Started cri-containerd-40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d.scope - libcontainer container 40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d. 
Dec 12 18:42:53.186483 containerd[1725]: time="2025-12-12T18:42:53.186442654Z" level=info msg="StartContainer for \"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\" returns successfully" Dec 12 18:42:53.412914 containerd[1725]: time="2025-12-12T18:42:53.412389458Z" level=info msg="CreateContainer within sandbox \"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 12 18:42:53.437606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe-rootfs.mount: Deactivated successfully. Dec 12 18:42:53.461397 containerd[1725]: time="2025-12-12T18:42:53.461366230Z" level=info msg="Container e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:42:53.464786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount882960512.mount: Deactivated successfully. Dec 12 18:42:53.486274 containerd[1725]: time="2025-12-12T18:42:53.486248215Z" level=info msg="CreateContainer within sandbox \"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b\"" Dec 12 18:42:53.486656 containerd[1725]: time="2025-12-12T18:42:53.486625900Z" level=info msg="StartContainer for \"e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b\"" Dec 12 18:42:53.489202 containerd[1725]: time="2025-12-12T18:42:53.489140276Z" level=info msg="connecting to shim e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b" address="unix:///run/containerd/s/0ff95394cbce5feb319b7b29b0aadf3b42aa92837828484bea03670572e1a68d" protocol=ttrpc version=3 Dec 12 18:42:53.530000 systemd[1]: Started cri-containerd-e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b.scope - libcontainer container 
e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b. Dec 12 18:42:53.617617 containerd[1725]: time="2025-12-12T18:42:53.617546489Z" level=info msg="StartContainer for \"e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b\" returns successfully" Dec 12 18:42:53.629220 systemd[1]: cri-containerd-e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b.scope: Deactivated successfully. Dec 12 18:42:53.633313 containerd[1725]: time="2025-12-12T18:42:53.633138488Z" level=info msg="received container exit event container_id:\"e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b\" id:\"e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b\" pid:3718 exited_at:{seconds:1765564973 nanos:632803742}" Dec 12 18:42:53.673669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b-rootfs.mount: Deactivated successfully. Dec 12 18:42:54.424121 containerd[1725]: time="2025-12-12T18:42:54.423930250Z" level=info msg="CreateContainer within sandbox \"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 12 18:42:54.441213 kubelet[3157]: I1212 18:42:54.441059 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pllbw" podStartSLOduration=2.520836461 podStartE2EDuration="14.441038389s" podCreationTimestamp="2025-12-12 18:42:40 +0000 UTC" firstStartedPulling="2025-12-12 18:42:41.164299381 +0000 UTC m=+6.956685033" lastFinishedPulling="2025-12-12 18:42:53.084501314 +0000 UTC m=+18.876886961" observedRunningTime="2025-12-12 18:42:53.460566027 +0000 UTC m=+19.252951708" watchObservedRunningTime="2025-12-12 18:42:54.441038389 +0000 UTC m=+20.233424047" Dec 12 18:42:54.458481 containerd[1725]: time="2025-12-12T18:42:54.458447894Z" level=info msg="Container 
e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:42:54.464767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3701268346.mount: Deactivated successfully. Dec 12 18:42:54.484260 containerd[1725]: time="2025-12-12T18:42:54.484223881Z" level=info msg="CreateContainer within sandbox \"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1\"" Dec 12 18:42:54.484788 containerd[1725]: time="2025-12-12T18:42:54.484766179Z" level=info msg="StartContainer for \"e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1\"" Dec 12 18:42:54.485630 containerd[1725]: time="2025-12-12T18:42:54.485590297Z" level=info msg="connecting to shim e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1" address="unix:///run/containerd/s/0ff95394cbce5feb319b7b29b0aadf3b42aa92837828484bea03670572e1a68d" protocol=ttrpc version=3 Dec 12 18:42:54.504985 systemd[1]: Started cri-containerd-e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1.scope - libcontainer container e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1. Dec 12 18:42:54.526153 systemd[1]: cri-containerd-e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1.scope: Deactivated successfully. 
Dec 12 18:42:54.530445 containerd[1725]: time="2025-12-12T18:42:54.530386513Z" level=info msg="received container exit event container_id:\"e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1\" id:\"e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1\" pid:3758 exited_at:{seconds:1765564974 nanos:526484928}" Dec 12 18:42:54.537345 containerd[1725]: time="2025-12-12T18:42:54.537318087Z" level=info msg="StartContainer for \"e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1\" returns successfully" Dec 12 18:42:54.547956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1-rootfs.mount: Deactivated successfully. Dec 12 18:42:55.429869 containerd[1725]: time="2025-12-12T18:42:55.429402598Z" level=info msg="CreateContainer within sandbox \"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 12 18:42:55.459910 containerd[1725]: time="2025-12-12T18:42:55.457129090Z" level=info msg="Container 9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:42:55.462864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3055678000.mount: Deactivated successfully. 
Dec 12 18:42:55.479505 containerd[1725]: time="2025-12-12T18:42:55.479280632Z" level=info msg="CreateContainer within sandbox \"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\"" Dec 12 18:42:55.480994 containerd[1725]: time="2025-12-12T18:42:55.480418348Z" level=info msg="StartContainer for \"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\"" Dec 12 18:42:55.482109 containerd[1725]: time="2025-12-12T18:42:55.482071529Z" level=info msg="connecting to shim 9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff" address="unix:///run/containerd/s/0ff95394cbce5feb319b7b29b0aadf3b42aa92837828484bea03670572e1a68d" protocol=ttrpc version=3 Dec 12 18:42:55.519992 systemd[1]: Started cri-containerd-9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff.scope - libcontainer container 9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff. Dec 12 18:42:55.598489 containerd[1725]: time="2025-12-12T18:42:55.598455425Z" level=info msg="StartContainer for \"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\" returns successfully" Dec 12 18:42:55.677993 kubelet[3157]: I1212 18:42:55.677973 3157 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 18:42:55.735283 systemd[1]: Created slice kubepods-burstable-pod411cfd61_8615_48eb_987b_b47caf5fddd0.slice - libcontainer container kubepods-burstable-pod411cfd61_8615_48eb_987b_b47caf5fddd0.slice. Dec 12 18:42:55.742686 systemd[1]: Created slice kubepods-burstable-pod835baf81_5cf6_4831_9f6a_c4e3be281185.slice - libcontainer container kubepods-burstable-pod835baf81_5cf6_4831_9f6a_c4e3be281185.slice. 
Dec 12 18:42:55.758114 kubelet[3157]: I1212 18:42:55.758084 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jllf\" (UniqueName: \"kubernetes.io/projected/835baf81-5cf6-4831-9f6a-c4e3be281185-kube-api-access-8jllf\") pod \"coredns-668d6bf9bc-m6zch\" (UID: \"835baf81-5cf6-4831-9f6a-c4e3be281185\") " pod="kube-system/coredns-668d6bf9bc-m6zch"
Dec 12 18:42:55.758225 kubelet[3157]: I1212 18:42:55.758129 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/411cfd61-8615-48eb-987b-b47caf5fddd0-config-volume\") pod \"coredns-668d6bf9bc-hcttz\" (UID: \"411cfd61-8615-48eb-987b-b47caf5fddd0\") " pod="kube-system/coredns-668d6bf9bc-hcttz"
Dec 12 18:42:55.758225 kubelet[3157]: I1212 18:42:55.758153 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tnld\" (UniqueName: \"kubernetes.io/projected/411cfd61-8615-48eb-987b-b47caf5fddd0-kube-api-access-7tnld\") pod \"coredns-668d6bf9bc-hcttz\" (UID: \"411cfd61-8615-48eb-987b-b47caf5fddd0\") " pod="kube-system/coredns-668d6bf9bc-hcttz"
Dec 12 18:42:55.758225 kubelet[3157]: I1212 18:42:55.758174 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/835baf81-5cf6-4831-9f6a-c4e3be281185-config-volume\") pod \"coredns-668d6bf9bc-m6zch\" (UID: \"835baf81-5cf6-4831-9f6a-c4e3be281185\") " pod="kube-system/coredns-668d6bf9bc-m6zch"
Dec 12 18:42:56.040853 containerd[1725]: time="2025-12-12T18:42:56.040740882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hcttz,Uid:411cfd61-8615-48eb-987b-b47caf5fddd0,Namespace:kube-system,Attempt:0,}"
Dec 12 18:42:56.048803 containerd[1725]: time="2025-12-12T18:42:56.048574853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m6zch,Uid:835baf81-5cf6-4831-9f6a-c4e3be281185,Namespace:kube-system,Attempt:0,}"
Dec 12 18:42:56.447317 kubelet[3157]: I1212 18:42:56.446989 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5mjdz" podStartSLOduration=8.85945365 podStartE2EDuration="16.446966593s" podCreationTimestamp="2025-12-12 18:42:40 +0000 UTC" firstStartedPulling="2025-12-12 18:42:41.100330195 +0000 UTC m=+6.892715848" lastFinishedPulling="2025-12-12 18:42:48.687843144 +0000 UTC m=+14.480228791" observedRunningTime="2025-12-12 18:42:56.445470919 +0000 UTC m=+22.237856574" watchObservedRunningTime="2025-12-12 18:42:56.446966593 +0000 UTC m=+22.239352248"
Dec 12 18:42:57.729588 systemd-networkd[1344]: cilium_host: Link UP
Dec 12 18:42:57.733164 systemd-networkd[1344]: cilium_net: Link UP
Dec 12 18:42:57.734599 systemd-networkd[1344]: cilium_net: Gained carrier
Dec 12 18:42:57.735861 systemd-networkd[1344]: cilium_host: Gained carrier
Dec 12 18:42:57.814896 systemd-networkd[1344]: cilium_host: Gained IPv6LL
Dec 12 18:42:57.899185 systemd-networkd[1344]: cilium_vxlan: Link UP
Dec 12 18:42:57.899192 systemd-networkd[1344]: cilium_vxlan: Gained carrier
Dec 12 18:42:58.134912 kernel: NET: Registered PF_ALG protocol family
Dec 12 18:42:58.710177 systemd-networkd[1344]: lxc_health: Link UP
Dec 12 18:42:58.715966 systemd-networkd[1344]: lxc_health: Gained carrier
Dec 12 18:42:58.734924 systemd-networkd[1344]: cilium_net: Gained IPv6LL
Dec 12 18:42:59.084931 kernel: eth0: renamed from tmp81b47
Dec 12 18:42:59.089530 systemd-networkd[1344]: lxcfa3d6093c29d: Link UP
Dec 12 18:42:59.089793 systemd-networkd[1344]: lxcfa3d6093c29d: Gained carrier
Dec 12 18:42:59.100049 systemd-networkd[1344]: lxc777fbad99800: Link UP
Dec 12 18:42:59.107920 kernel: eth0: renamed from tmp31f00
Dec 12 18:42:59.110464 systemd-networkd[1344]: lxc777fbad99800: Gained carrier
Dec 12 18:42:59.567081 systemd-networkd[1344]: cilium_vxlan: Gained IPv6LL
Dec 12 18:43:00.143038 systemd-networkd[1344]: lxc_health: Gained IPv6LL
Dec 12 18:43:00.591123 systemd-networkd[1344]: lxcfa3d6093c29d: Gained IPv6LL
Dec 12 18:43:00.847077 systemd-networkd[1344]: lxc777fbad99800: Gained IPv6LL
Dec 12 18:43:02.387756 containerd[1725]: time="2025-12-12T18:43:02.387221523Z" level=info msg="connecting to shim 81b4787c41efbdfe849a9e39ccdf12561f4e4d3d00f98931d514c7008cf0b2c9" address="unix:///run/containerd/s/03b2edfe4f9bb2dadc2c597c508cdcd6e80172830801adaaf4f5f672613ae1c5" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:43:02.398765 containerd[1725]: time="2025-12-12T18:43:02.398560696Z" level=info msg="connecting to shim 31f00a296fcbd81e31c5f628a2b440bcdd278ac3dee77f254fb465a8144e9a02" address="unix:///run/containerd/s/0f53373d3859f4a24982351e1bf87b4fc8e2938beb76a44ef22d20147c4ddf5e" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:43:02.446017 systemd[1]: Started cri-containerd-81b4787c41efbdfe849a9e39ccdf12561f4e4d3d00f98931d514c7008cf0b2c9.scope - libcontainer container 81b4787c41efbdfe849a9e39ccdf12561f4e4d3d00f98931d514c7008cf0b2c9.
Dec 12 18:43:02.451047 systemd[1]: Started cri-containerd-31f00a296fcbd81e31c5f628a2b440bcdd278ac3dee77f254fb465a8144e9a02.scope - libcontainer container 31f00a296fcbd81e31c5f628a2b440bcdd278ac3dee77f254fb465a8144e9a02.
Dec 12 18:43:02.513167 containerd[1725]: time="2025-12-12T18:43:02.513133593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m6zch,Uid:835baf81-5cf6-4831-9f6a-c4e3be281185,Namespace:kube-system,Attempt:0,} returns sandbox id \"31f00a296fcbd81e31c5f628a2b440bcdd278ac3dee77f254fb465a8144e9a02\""
Dec 12 18:43:02.520796 containerd[1725]: time="2025-12-12T18:43:02.520756865Z" level=info msg="CreateContainer within sandbox \"31f00a296fcbd81e31c5f628a2b440bcdd278ac3dee77f254fb465a8144e9a02\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 12 18:43:02.523446 containerd[1725]: time="2025-12-12T18:43:02.523393503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hcttz,Uid:411cfd61-8615-48eb-987b-b47caf5fddd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"81b4787c41efbdfe849a9e39ccdf12561f4e4d3d00f98931d514c7008cf0b2c9\""
Dec 12 18:43:02.530311 containerd[1725]: time="2025-12-12T18:43:02.530287709Z" level=info msg="CreateContainer within sandbox \"81b4787c41efbdfe849a9e39ccdf12561f4e4d3d00f98931d514c7008cf0b2c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 12 18:43:02.545640 containerd[1725]: time="2025-12-12T18:43:02.545490370Z" level=info msg="Container 16b6e874a7fe41f56a441289bca725e6df7a8db2b143e1075f48933b59fa607e: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:43:02.560653 containerd[1725]: time="2025-12-12T18:43:02.560623084Z" level=info msg="Container d3ae9fb485040dc2e9f97e92b5c3d45854c6c6ee5088d9fb9a3d8b1a4a173067: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:43:02.570819 containerd[1725]: time="2025-12-12T18:43:02.570792432Z" level=info msg="CreateContainer within sandbox \"31f00a296fcbd81e31c5f628a2b440bcdd278ac3dee77f254fb465a8144e9a02\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"16b6e874a7fe41f56a441289bca725e6df7a8db2b143e1075f48933b59fa607e\""
Dec 12 18:43:02.571352 containerd[1725]: time="2025-12-12T18:43:02.571319656Z" level=info msg="StartContainer for \"16b6e874a7fe41f56a441289bca725e6df7a8db2b143e1075f48933b59fa607e\""
Dec 12 18:43:02.572351 containerd[1725]: time="2025-12-12T18:43:02.572319177Z" level=info msg="connecting to shim 16b6e874a7fe41f56a441289bca725e6df7a8db2b143e1075f48933b59fa607e" address="unix:///run/containerd/s/0f53373d3859f4a24982351e1bf87b4fc8e2938beb76a44ef22d20147c4ddf5e" protocol=ttrpc version=3
Dec 12 18:43:02.580555 containerd[1725]: time="2025-12-12T18:43:02.580520894Z" level=info msg="CreateContainer within sandbox \"81b4787c41efbdfe849a9e39ccdf12561f4e4d3d00f98931d514c7008cf0b2c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d3ae9fb485040dc2e9f97e92b5c3d45854c6c6ee5088d9fb9a3d8b1a4a173067\""
Dec 12 18:43:02.582366 containerd[1725]: time="2025-12-12T18:43:02.581103556Z" level=info msg="StartContainer for \"d3ae9fb485040dc2e9f97e92b5c3d45854c6c6ee5088d9fb9a3d8b1a4a173067\""
Dec 12 18:43:02.582366 containerd[1725]: time="2025-12-12T18:43:02.582062831Z" level=info msg="connecting to shim d3ae9fb485040dc2e9f97e92b5c3d45854c6c6ee5088d9fb9a3d8b1a4a173067" address="unix:///run/containerd/s/03b2edfe4f9bb2dadc2c597c508cdcd6e80172830801adaaf4f5f672613ae1c5" protocol=ttrpc version=3
Dec 12 18:43:02.593098 systemd[1]: Started cri-containerd-16b6e874a7fe41f56a441289bca725e6df7a8db2b143e1075f48933b59fa607e.scope - libcontainer container 16b6e874a7fe41f56a441289bca725e6df7a8db2b143e1075f48933b59fa607e.
Dec 12 18:43:02.604964 systemd[1]: Started cri-containerd-d3ae9fb485040dc2e9f97e92b5c3d45854c6c6ee5088d9fb9a3d8b1a4a173067.scope - libcontainer container d3ae9fb485040dc2e9f97e92b5c3d45854c6c6ee5088d9fb9a3d8b1a4a173067.
Dec 12 18:43:02.645574 containerd[1725]: time="2025-12-12T18:43:02.645480810Z" level=info msg="StartContainer for \"d3ae9fb485040dc2e9f97e92b5c3d45854c6c6ee5088d9fb9a3d8b1a4a173067\" returns successfully"
Dec 12 18:43:02.646135 containerd[1725]: time="2025-12-12T18:43:02.645894984Z" level=info msg="StartContainer for \"16b6e874a7fe41f56a441289bca725e6df7a8db2b143e1075f48933b59fa607e\" returns successfully"
Dec 12 18:43:03.480003 kubelet[3157]: I1212 18:43:03.479710 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m6zch" podStartSLOduration=23.479689768 podStartE2EDuration="23.479689768s" podCreationTimestamp="2025-12-12 18:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:43:03.463697905 +0000 UTC m=+29.256083561" watchObservedRunningTime="2025-12-12 18:43:03.479689768 +0000 UTC m=+29.272075421"
Dec 12 18:43:03.498021 kubelet[3157]: I1212 18:43:03.497976 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hcttz" podStartSLOduration=23.497940089 podStartE2EDuration="23.497940089s" podCreationTimestamp="2025-12-12 18:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:43:03.497121159 +0000 UTC m=+29.289506819" watchObservedRunningTime="2025-12-12 18:43:03.497940089 +0000 UTC m=+29.290325745"
Dec 12 18:44:08.107507 systemd[1]: Started sshd@7-10.200.4.12:22-10.200.16.10:57058.service - OpenSSH per-connection server daemon (10.200.16.10:57058).
Dec 12 18:44:08.702580 sshd[4477]: Accepted publickey for core from 10.200.16.10 port 57058 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk
Dec 12 18:44:08.703762 sshd-session[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:44:08.709081 systemd-logind[1691]: New session 10 of user core.
Dec 12 18:44:08.718004 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 12 18:44:09.187433 sshd[4480]: Connection closed by 10.200.16.10 port 57058
Dec 12 18:44:09.188014 sshd-session[4477]: pam_unix(sshd:session): session closed for user core
Dec 12 18:44:09.191868 systemd[1]: sshd@7-10.200.4.12:22-10.200.16.10:57058.service: Deactivated successfully.
Dec 12 18:44:09.193918 systemd[1]: session-10.scope: Deactivated successfully.
Dec 12 18:44:09.194638 systemd-logind[1691]: Session 10 logged out. Waiting for processes to exit.
Dec 12 18:44:09.195934 systemd-logind[1691]: Removed session 10.
Dec 12 18:44:14.300395 systemd[1]: Started sshd@8-10.200.4.12:22-10.200.16.10:60802.service - OpenSSH per-connection server daemon (10.200.16.10:60802).
Dec 12 18:44:14.901139 sshd[4495]: Accepted publickey for core from 10.200.16.10 port 60802 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk
Dec 12 18:44:14.902365 sshd-session[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:44:14.906982 systemd-logind[1691]: New session 11 of user core.
Dec 12 18:44:14.911013 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 12 18:44:15.394068 sshd[4498]: Connection closed by 10.200.16.10 port 60802
Dec 12 18:44:15.394633 sshd-session[4495]: pam_unix(sshd:session): session closed for user core
Dec 12 18:44:15.397643 systemd[1]: sshd@8-10.200.4.12:22-10.200.16.10:60802.service: Deactivated successfully.
Dec 12 18:44:15.399905 systemd[1]: session-11.scope: Deactivated successfully.
Dec 12 18:44:15.402006 systemd-logind[1691]: Session 11 logged out. Waiting for processes to exit.
Dec 12 18:44:15.403109 systemd-logind[1691]: Removed session 11.
Dec 12 18:44:20.506177 systemd[1]: Started sshd@9-10.200.4.12:22-10.200.16.10:34664.service - OpenSSH per-connection server daemon (10.200.16.10:34664).
Dec 12 18:44:21.103368 sshd[4510]: Accepted publickey for core from 10.200.16.10 port 34664 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk
Dec 12 18:44:21.104643 sshd-session[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:44:21.109249 systemd-logind[1691]: New session 12 of user core.
Dec 12 18:44:21.116986 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 12 18:44:21.582251 sshd[4513]: Connection closed by 10.200.16.10 port 34664
Dec 12 18:44:21.583074 sshd-session[4510]: pam_unix(sshd:session): session closed for user core
Dec 12 18:44:21.587266 systemd[1]: sshd@9-10.200.4.12:22-10.200.16.10:34664.service: Deactivated successfully.
Dec 12 18:44:21.589184 systemd[1]: session-12.scope: Deactivated successfully.
Dec 12 18:44:21.590185 systemd-logind[1691]: Session 12 logged out. Waiting for processes to exit.
Dec 12 18:44:21.591414 systemd-logind[1691]: Removed session 12.
Dec 12 18:44:26.689176 systemd[1]: Started sshd@10-10.200.4.12:22-10.200.16.10:34676.service - OpenSSH per-connection server daemon (10.200.16.10:34676).
Dec 12 18:44:27.287233 sshd[4526]: Accepted publickey for core from 10.200.16.10 port 34676 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk
Dec 12 18:44:27.288540 sshd-session[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:44:27.292921 systemd-logind[1691]: New session 13 of user core.
Dec 12 18:44:27.296012 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 12 18:44:27.770938 sshd[4529]: Connection closed by 10.200.16.10 port 34676
Dec 12 18:44:27.772028 sshd-session[4526]: pam_unix(sshd:session): session closed for user core
Dec 12 18:44:27.774929 systemd[1]: sshd@10-10.200.4.12:22-10.200.16.10:34676.service: Deactivated successfully.
Dec 12 18:44:27.777329 systemd[1]: session-13.scope: Deactivated successfully.
Dec 12 18:44:27.779816 systemd-logind[1691]: Session 13 logged out. Waiting for processes to exit.
Dec 12 18:44:27.780754 systemd-logind[1691]: Removed session 13.
Dec 12 18:44:27.881806 systemd[1]: Started sshd@11-10.200.4.12:22-10.200.16.10:34684.service - OpenSSH per-connection server daemon (10.200.16.10:34684).
Dec 12 18:44:28.488081 sshd[4542]: Accepted publickey for core from 10.200.16.10 port 34684 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk
Dec 12 18:44:28.489284 sshd-session[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:44:28.493804 systemd-logind[1691]: New session 14 of user core.
Dec 12 18:44:28.497984 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 12 18:44:29.011134 sshd[4545]: Connection closed by 10.200.16.10 port 34684
Dec 12 18:44:29.011449 sshd-session[4542]: pam_unix(sshd:session): session closed for user core
Dec 12 18:44:29.014566 systemd[1]: sshd@11-10.200.4.12:22-10.200.16.10:34684.service: Deactivated successfully.
Dec 12 18:44:29.016788 systemd[1]: session-14.scope: Deactivated successfully.
Dec 12 18:44:29.018657 systemd-logind[1691]: Session 14 logged out. Waiting for processes to exit.
Dec 12 18:44:29.019755 systemd-logind[1691]: Removed session 14.
Dec 12 18:44:29.135064 systemd[1]: Started sshd@12-10.200.4.12:22-10.200.16.10:34700.service - OpenSSH per-connection server daemon (10.200.16.10:34700).
Dec 12 18:44:29.734626 sshd[4555]: Accepted publickey for core from 10.200.16.10 port 34700 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk
Dec 12 18:44:29.735078 sshd-session[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:44:29.739648 systemd-logind[1691]: New session 15 of user core.
Dec 12 18:44:29.747005 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 12 18:44:30.215410 sshd[4558]: Connection closed by 10.200.16.10 port 34700
Dec 12 18:44:30.216031 sshd-session[4555]: pam_unix(sshd:session): session closed for user core
Dec 12 18:44:30.220132 systemd-logind[1691]: Session 15 logged out. Waiting for processes to exit.
Dec 12 18:44:30.220298 systemd[1]: sshd@12-10.200.4.12:22-10.200.16.10:34700.service: Deactivated successfully.
Dec 12 18:44:30.223001 systemd[1]: session-15.scope: Deactivated successfully.
Dec 12 18:44:30.224325 systemd-logind[1691]: Removed session 15.
Dec 12 18:44:35.339292 systemd[1]: Started sshd@13-10.200.4.12:22-10.200.16.10:57202.service - OpenSSH per-connection server daemon (10.200.16.10:57202).
Dec 12 18:44:35.938337 sshd[4572]: Accepted publickey for core from 10.200.16.10 port 57202 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk
Dec 12 18:44:35.939555 sshd-session[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:44:35.944192 systemd-logind[1691]: New session 16 of user core.
Dec 12 18:44:35.950971 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 12 18:44:36.434240 sshd[4575]: Connection closed by 10.200.16.10 port 57202
Dec 12 18:44:36.434878 sshd-session[4572]: pam_unix(sshd:session): session closed for user core
Dec 12 18:44:36.438020 systemd[1]: sshd@13-10.200.4.12:22-10.200.16.10:57202.service: Deactivated successfully.
Dec 12 18:44:36.439965 systemd[1]: session-16.scope: Deactivated successfully.
Dec 12 18:44:36.441301 systemd-logind[1691]: Session 16 logged out. Waiting for processes to exit.
Dec 12 18:44:36.442695 systemd-logind[1691]: Removed session 16.
Dec 12 18:44:36.542795 systemd[1]: Started sshd@14-10.200.4.12:22-10.200.16.10:57208.service - OpenSSH per-connection server daemon (10.200.16.10:57208).
Dec 12 18:44:37.145844 sshd[4587]: Accepted publickey for core from 10.200.16.10 port 57208 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk
Dec 12 18:44:37.147205 sshd-session[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:44:37.152064 systemd-logind[1691]: New session 17 of user core.
Dec 12 18:44:37.160976 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 12 18:44:37.847199 sshd[4590]: Connection closed by 10.200.16.10 port 57208
Dec 12 18:44:37.847789 sshd-session[4587]: pam_unix(sshd:session): session closed for user core
Dec 12 18:44:37.851781 systemd[1]: sshd@14-10.200.4.12:22-10.200.16.10:57208.service: Deactivated successfully.
Dec 12 18:44:37.853803 systemd[1]: session-17.scope: Deactivated successfully.
Dec 12 18:44:37.855171 systemd-logind[1691]: Session 17 logged out. Waiting for processes to exit.
Dec 12 18:44:37.856706 systemd-logind[1691]: Removed session 17.
Dec 12 18:44:37.955960 systemd[1]: Started sshd@15-10.200.4.12:22-10.200.16.10:57212.service - OpenSSH per-connection server daemon (10.200.16.10:57212).
Dec 12 18:44:38.560820 sshd[4600]: Accepted publickey for core from 10.200.16.10 port 57212 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk
Dec 12 18:44:38.563273 sshd-session[4600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:44:38.568001 systemd-logind[1691]: New session 18 of user core.
Dec 12 18:44:38.574996 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 12 18:44:39.528962 sshd[4603]: Connection closed by 10.200.16.10 port 57212
Dec 12 18:44:39.529558 sshd-session[4600]: pam_unix(sshd:session): session closed for user core
Dec 12 18:44:39.533439 systemd[1]: sshd@15-10.200.4.12:22-10.200.16.10:57212.service: Deactivated successfully.
Dec 12 18:44:39.535465 systemd[1]: session-18.scope: Deactivated successfully.
Dec 12 18:44:39.536422 systemd-logind[1691]: Session 18 logged out. Waiting for processes to exit.
Dec 12 18:44:39.538269 systemd-logind[1691]: Removed session 18.
Dec 12 18:44:39.635232 systemd[1]: Started sshd@16-10.200.4.12:22-10.200.16.10:57218.service - OpenSSH per-connection server daemon (10.200.16.10:57218).
Dec 12 18:44:40.240610 sshd[4620]: Accepted publickey for core from 10.200.16.10 port 57218 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk
Dec 12 18:44:40.241791 sshd-session[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:44:40.246763 systemd-logind[1691]: New session 19 of user core.
Dec 12 18:44:40.253007 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 12 18:44:40.818090 sshd[4623]: Connection closed by 10.200.16.10 port 57218
Dec 12 18:44:40.818619 sshd-session[4620]: pam_unix(sshd:session): session closed for user core
Dec 12 18:44:40.822211 systemd[1]: sshd@16-10.200.4.12:22-10.200.16.10:57218.service: Deactivated successfully.
Dec 12 18:44:40.824027 systemd[1]: session-19.scope: Deactivated successfully.
Dec 12 18:44:40.825397 systemd-logind[1691]: Session 19 logged out. Waiting for processes to exit.
Dec 12 18:44:40.826779 systemd-logind[1691]: Removed session 19.
Dec 12 18:44:40.933931 systemd[1]: Started sshd@17-10.200.4.12:22-10.200.16.10:60506.service - OpenSSH per-connection server daemon (10.200.16.10:60506).
Dec 12 18:44:41.532062 sshd[4632]: Accepted publickey for core from 10.200.16.10 port 60506 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk
Dec 12 18:44:41.533205 sshd-session[4632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:44:41.537794 systemd-logind[1691]: New session 20 of user core.
Dec 12 18:44:41.544998 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 12 18:44:42.015815 sshd[4637]: Connection closed by 10.200.16.10 port 60506
Dec 12 18:44:42.017534 sshd-session[4632]: pam_unix(sshd:session): session closed for user core
Dec 12 18:44:42.020210 systemd[1]: sshd@17-10.200.4.12:22-10.200.16.10:60506.service: Deactivated successfully.
Dec 12 18:44:42.022252 systemd[1]: session-20.scope: Deactivated successfully.
Dec 12 18:44:42.024053 systemd-logind[1691]: Session 20 logged out. Waiting for processes to exit.
Dec 12 18:44:42.025591 systemd-logind[1691]: Removed session 20.
Dec 12 18:44:47.126358 systemd[1]: Started sshd@18-10.200.4.12:22-10.200.16.10:60510.service - OpenSSH per-connection server daemon (10.200.16.10:60510).
Dec 12 18:44:47.724500 sshd[4651]: Accepted publickey for core from 10.200.16.10 port 60510 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk
Dec 12 18:44:47.725696 sshd-session[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:44:47.729890 systemd-logind[1691]: New session 21 of user core.
Dec 12 18:44:47.737991 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 12 18:44:48.198439 sshd[4654]: Connection closed by 10.200.16.10 port 60510
Dec 12 18:44:48.198983 sshd-session[4651]: pam_unix(sshd:session): session closed for user core
Dec 12 18:44:48.202410 systemd[1]: sshd@18-10.200.4.12:22-10.200.16.10:60510.service: Deactivated successfully.
Dec 12 18:44:48.204227 systemd[1]: session-21.scope: Deactivated successfully.
Dec 12 18:44:48.205670 systemd-logind[1691]: Session 21 logged out. Waiting for processes to exit.
Dec 12 18:44:48.207297 systemd-logind[1691]: Removed session 21.
Dec 12 18:44:53.310179 systemd[1]: Started sshd@19-10.200.4.12:22-10.200.16.10:40536.service - OpenSSH per-connection server daemon (10.200.16.10:40536).
Dec 12 18:44:53.911069 sshd[4666]: Accepted publickey for core from 10.200.16.10 port 40536 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk
Dec 12 18:44:53.913165 sshd-session[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:44:53.917912 systemd-logind[1691]: New session 22 of user core.
Dec 12 18:44:53.925994 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 12 18:44:54.389511 sshd[4669]: Connection closed by 10.200.16.10 port 40536
Dec 12 18:44:54.389897 sshd-session[4666]: pam_unix(sshd:session): session closed for user core
Dec 12 18:44:54.393271 systemd-logind[1691]: Session 22 logged out. Waiting for processes to exit.
Dec 12 18:44:54.394035 systemd[1]: sshd@19-10.200.4.12:22-10.200.16.10:40536.service: Deactivated successfully.
Dec 12 18:44:54.397491 systemd[1]: session-22.scope: Deactivated successfully.
Dec 12 18:44:54.401081 systemd-logind[1691]: Removed session 22.
Dec 12 18:44:59.500161 systemd[1]: Started sshd@20-10.200.4.12:22-10.200.16.10:40546.service - OpenSSH per-connection server daemon (10.200.16.10:40546).
Dec 12 18:45:00.102379 sshd[4681]: Accepted publickey for core from 10.200.16.10 port 40546 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk
Dec 12 18:45:00.103640 sshd-session[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:00.108559 systemd-logind[1691]: New session 23 of user core.
Dec 12 18:45:00.113988 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 12 18:45:00.581908 sshd[4684]: Connection closed by 10.200.16.10 port 40546
Dec 12 18:45:00.582753 sshd-session[4681]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:00.586562 systemd[1]: sshd@20-10.200.4.12:22-10.200.16.10:40546.service: Deactivated successfully.
Dec 12 18:45:00.588699 systemd[1]: session-23.scope: Deactivated successfully.
Dec 12 18:45:00.589490 systemd-logind[1691]: Session 23 logged out. Waiting for processes to exit.
Dec 12 18:45:00.591447 systemd-logind[1691]: Removed session 23.
Dec 12 18:45:00.693241 systemd[1]: Started sshd@21-10.200.4.12:22-10.200.16.10:46004.service - OpenSSH per-connection server daemon (10.200.16.10:46004).
Dec 12 18:45:01.296180 sshd[4696]: Accepted publickey for core from 10.200.16.10 port 46004 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk
Dec 12 18:45:01.297530 sshd-session[4696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:01.302726 systemd-logind[1691]: New session 24 of user core.
Dec 12 18:45:01.309005 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 12 18:45:02.940742 containerd[1725]: time="2025-12-12T18:45:02.940156769Z" level=info msg="StopContainer for \"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\" with timeout 30 (s)"
Dec 12 18:45:02.941701 containerd[1725]: time="2025-12-12T18:45:02.941660250Z" level=info msg="Stop container \"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\" with signal terminated"
Dec 12 18:45:02.955718 systemd[1]: cri-containerd-40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d.scope: Deactivated successfully.
Dec 12 18:45:02.959373 containerd[1725]: time="2025-12-12T18:45:02.959291679Z" level=info msg="received container exit event container_id:\"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\" id:\"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\" pid:3686 exited_at:{seconds:1765565102 nanos:958948601}"
Dec 12 18:45:02.970140 containerd[1725]: time="2025-12-12T18:45:02.970075446Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 12 18:45:02.978345 containerd[1725]: time="2025-12-12T18:45:02.978298759Z" level=info msg="StopContainer for \"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\" with timeout 2 (s)"
Dec 12 18:45:02.978650 containerd[1725]: time="2025-12-12T18:45:02.978622297Z" level=info msg="Stop container \"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\" with signal terminated"
Dec 12 18:45:02.988129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d-rootfs.mount: Deactivated successfully.
Dec 12 18:45:02.992463 systemd-networkd[1344]: lxc_health: Link DOWN
Dec 12 18:45:02.992469 systemd-networkd[1344]: lxc_health: Lost carrier
Dec 12 18:45:03.008707 systemd[1]: cri-containerd-9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff.scope: Deactivated successfully.
Dec 12 18:45:03.009206 systemd[1]: cri-containerd-9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff.scope: Consumed 5.965s CPU time, 125M memory peak, 128K read from disk, 13.3M written to disk.
Dec 12 18:45:03.010105 containerd[1725]: time="2025-12-12T18:45:03.010004723Z" level=info msg="received container exit event container_id:\"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\" id:\"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\" pid:3794 exited_at:{seconds:1765565103 nanos:9728657}"
Dec 12 18:45:03.027742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff-rootfs.mount: Deactivated successfully.
Dec 12 18:45:03.073686 containerd[1725]: time="2025-12-12T18:45:03.073656869Z" level=info msg="StopContainer for \"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\" returns successfully"
Dec 12 18:45:03.074354 containerd[1725]: time="2025-12-12T18:45:03.074317680Z" level=info msg="StopPodSandbox for \"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\""
Dec 12 18:45:03.074490 containerd[1725]: time="2025-12-12T18:45:03.074468623Z" level=info msg="Container to stop \"4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:45:03.074490 containerd[1725]: time="2025-12-12T18:45:03.074486710Z" level=info msg="Container to stop \"54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:45:03.074571 containerd[1725]: time="2025-12-12T18:45:03.074496722Z" level=info msg="Container to stop \"e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:45:03.074571 containerd[1725]: time="2025-12-12T18:45:03.074507328Z" level=info msg="Container to stop \"e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:45:03.074571 containerd[1725]: time="2025-12-12T18:45:03.074517690Z" level=info msg="Container to stop \"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:45:03.080332 containerd[1725]: time="2025-12-12T18:45:03.080244532Z" level=info msg="StopContainer for \"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\" returns successfully"
Dec 12 18:45:03.080741 containerd[1725]: time="2025-12-12T18:45:03.080719135Z" level=info msg="StopPodSandbox for \"1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e\""
Dec 12 18:45:03.080810 containerd[1725]: time="2025-12-12T18:45:03.080795846Z" level=info msg="Container to stop \"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:45:03.082632 systemd[1]: cri-containerd-f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6.scope: Deactivated successfully.
Dec 12 18:45:03.086128 containerd[1725]: time="2025-12-12T18:45:03.085395811Z" level=info msg="received sandbox exit event container_id:\"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\" id:\"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\" exit_status:137 exited_at:{seconds:1765565103 nanos:85016500}" monitor_name=podsandbox
Dec 12 18:45:03.093240 systemd[1]: cri-containerd-1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e.scope: Deactivated successfully.
Dec 12 18:45:03.100858 containerd[1725]: time="2025-12-12T18:45:03.100788942Z" level=info msg="received sandbox exit event container_id:\"1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e\" id:\"1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e\" exit_status:137 exited_at:{seconds:1765565103 nanos:99732703}" monitor_name=podsandbox
Dec 12 18:45:03.110397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6-rootfs.mount: Deactivated successfully.
Dec 12 18:45:03.123613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e-rootfs.mount: Deactivated successfully.
Dec 12 18:45:03.138457 containerd[1725]: time="2025-12-12T18:45:03.138378941Z" level=info msg="shim disconnected" id=f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6 namespace=k8s.io
Dec 12 18:45:03.138457 containerd[1725]: time="2025-12-12T18:45:03.138423847Z" level=warning msg="cleaning up after shim disconnected" id=f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6 namespace=k8s.io
Dec 12 18:45:03.138740 containerd[1725]: time="2025-12-12T18:45:03.138432499Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 12 18:45:03.138740 containerd[1725]: time="2025-12-12T18:45:03.138678152Z" level=info msg="shim disconnected" id=1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e namespace=k8s.io
Dec 12 18:45:03.138740 containerd[1725]: time="2025-12-12T18:45:03.138692514Z" level=warning msg="cleaning up after shim disconnected" id=1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e namespace=k8s.io
Dec 12 18:45:03.138740 containerd[1725]: time="2025-12-12T18:45:03.138700376Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 12 18:45:03.153880 containerd[1725]: time="2025-12-12T18:45:03.153500175Z" level=info msg="received sandbox container exit event sandbox_id:\"1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e\" exit_status:137 exited_at:{seconds:1765565103 nanos:99732703}" monitor_name=criService
Dec 12 18:45:03.154888 containerd[1725]: time="2025-12-12T18:45:03.154196489Z" level=info msg="TearDown network for sandbox \"1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e\" successfully"
Dec 12 18:45:03.154997 containerd[1725]: time="2025-12-12T18:45:03.154976548Z" level=info msg="StopPodSandbox for \"1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e\" returns successfully"
Dec 12 18:45:03.156447 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1228e24d63dcd2dd9e8d690109a863cdbe1f1268bbb8b85028e7dc0178a6ee6e-shm.mount: Deactivated successfully.
Dec 12 18:45:03.158845 containerd[1725]: time="2025-12-12T18:45:03.158545885Z" level=info msg="received sandbox container exit event sandbox_id:\"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\" exit_status:137 exited_at:{seconds:1765565103 nanos:85016500}" monitor_name=criService
Dec 12 18:45:03.159142 containerd[1725]: time="2025-12-12T18:45:03.159108927Z" level=info msg="TearDown network for sandbox \"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\" successfully"
Dec 12 18:45:03.159229 containerd[1725]: time="2025-12-12T18:45:03.159217501Z" level=info msg="StopPodSandbox for \"f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6\" returns successfully"
Dec 12 18:45:03.321667 kubelet[3157]: I1212 18:45:03.321574 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-cilium-cgroup\") pod \"b27074df-f036-4c0a-8771-c75dc95cf26a\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") "
Dec 12 18:45:03.321667 kubelet[3157]: I1212 18:45:03.321608 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-etc-cni-netd\") pod \"b27074df-f036-4c0a-8771-c75dc95cf26a\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") "
Dec 12 18:45:03.321667 kubelet[3157]: I1212 18:45:03.321626 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-host-proc-sys-kernel\") pod \"b27074df-f036-4c0a-8771-c75dc95cf26a\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") "
Dec 12 18:45:03.321667 kubelet[3157]: I1212 18:45:03.321643 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-cilium-run\") pod \"b27074df-f036-4c0a-8771-c75dc95cf26a\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") "
Dec 12 18:45:03.321667 kubelet[3157]: I1212 18:45:03.321667 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b27074df-f036-4c0a-8771-c75dc95cf26a-cilium-config-path\") pod \"b27074df-f036-4c0a-8771-c75dc95cf26a\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") "
Dec 12 18:45:03.322113 kubelet[3157]: I1212 18:45:03.321687 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-xtables-lock\") pod \"b27074df-f036-4c0a-8771-c75dc95cf26a\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") "
Dec 12 18:45:03.322113 kubelet[3157]: I1212 18:45:03.321702 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-hostproc\") pod \"b27074df-f036-4c0a-8771-c75dc95cf26a\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") "
Dec 12 18:45:03.322113 kubelet[3157]: I1212 18:45:03.321720
3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b27074df-f036-4c0a-8771-c75dc95cf26a-hubble-tls\") pod \"b27074df-f036-4c0a-8771-c75dc95cf26a\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " Dec 12 18:45:03.322113 kubelet[3157]: I1212 18:45:03.321739 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc77q\" (UniqueName: \"kubernetes.io/projected/f5c0af83-6dc7-4107-8a9e-0a3cda938f36-kube-api-access-zc77q\") pod \"f5c0af83-6dc7-4107-8a9e-0a3cda938f36\" (UID: \"f5c0af83-6dc7-4107-8a9e-0a3cda938f36\") " Dec 12 18:45:03.322113 kubelet[3157]: I1212 18:45:03.321761 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-host-proc-sys-net\") pod \"b27074df-f036-4c0a-8771-c75dc95cf26a\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " Dec 12 18:45:03.322113 kubelet[3157]: I1212 18:45:03.321780 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x57k\" (UniqueName: \"kubernetes.io/projected/b27074df-f036-4c0a-8771-c75dc95cf26a-kube-api-access-7x57k\") pod \"b27074df-f036-4c0a-8771-c75dc95cf26a\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " Dec 12 18:45:03.322269 kubelet[3157]: I1212 18:45:03.321796 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-lib-modules\") pod \"b27074df-f036-4c0a-8771-c75dc95cf26a\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " Dec 12 18:45:03.322269 kubelet[3157]: I1212 18:45:03.321815 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b27074df-f036-4c0a-8771-c75dc95cf26a-clustermesh-secrets\") pod 
\"b27074df-f036-4c0a-8771-c75dc95cf26a\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " Dec 12 18:45:03.322269 kubelet[3157]: I1212 18:45:03.321848 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5c0af83-6dc7-4107-8a9e-0a3cda938f36-cilium-config-path\") pod \"f5c0af83-6dc7-4107-8a9e-0a3cda938f36\" (UID: \"f5c0af83-6dc7-4107-8a9e-0a3cda938f36\") " Dec 12 18:45:03.322269 kubelet[3157]: I1212 18:45:03.321867 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-cni-path\") pod \"b27074df-f036-4c0a-8771-c75dc95cf26a\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " Dec 12 18:45:03.322269 kubelet[3157]: I1212 18:45:03.321884 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-bpf-maps\") pod \"b27074df-f036-4c0a-8771-c75dc95cf26a\" (UID: \"b27074df-f036-4c0a-8771-c75dc95cf26a\") " Dec 12 18:45:03.322269 kubelet[3157]: I1212 18:45:03.321956 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b27074df-f036-4c0a-8771-c75dc95cf26a" (UID: "b27074df-f036-4c0a-8771-c75dc95cf26a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:45:03.322419 kubelet[3157]: I1212 18:45:03.321994 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b27074df-f036-4c0a-8771-c75dc95cf26a" (UID: "b27074df-f036-4c0a-8771-c75dc95cf26a"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:45:03.322419 kubelet[3157]: I1212 18:45:03.322009 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b27074df-f036-4c0a-8771-c75dc95cf26a" (UID: "b27074df-f036-4c0a-8771-c75dc95cf26a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:45:03.322419 kubelet[3157]: I1212 18:45:03.322024 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b27074df-f036-4c0a-8771-c75dc95cf26a" (UID: "b27074df-f036-4c0a-8771-c75dc95cf26a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:45:03.322419 kubelet[3157]: I1212 18:45:03.322039 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b27074df-f036-4c0a-8771-c75dc95cf26a" (UID: "b27074df-f036-4c0a-8771-c75dc95cf26a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:45:03.323862 kubelet[3157]: I1212 18:45:03.322559 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b27074df-f036-4c0a-8771-c75dc95cf26a" (UID: "b27074df-f036-4c0a-8771-c75dc95cf26a"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:45:03.323862 kubelet[3157]: I1212 18:45:03.322593 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b27074df-f036-4c0a-8771-c75dc95cf26a" (UID: "b27074df-f036-4c0a-8771-c75dc95cf26a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:45:03.323862 kubelet[3157]: I1212 18:45:03.322614 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-hostproc" (OuterVolumeSpecName: "hostproc") pod "b27074df-f036-4c0a-8771-c75dc95cf26a" (UID: "b27074df-f036-4c0a-8771-c75dc95cf26a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:45:03.325845 kubelet[3157]: I1212 18:45:03.325803 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27074df-f036-4c0a-8771-c75dc95cf26a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b27074df-f036-4c0a-8771-c75dc95cf26a" (UID: "b27074df-f036-4c0a-8771-c75dc95cf26a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:45:03.325986 kubelet[3157]: I1212 18:45:03.325965 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b27074df-f036-4c0a-8771-c75dc95cf26a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b27074df-f036-4c0a-8771-c75dc95cf26a" (UID: "b27074df-f036-4c0a-8771-c75dc95cf26a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:45:03.328349 kubelet[3157]: I1212 18:45:03.328316 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27074df-f036-4c0a-8771-c75dc95cf26a-kube-api-access-7x57k" (OuterVolumeSpecName: "kube-api-access-7x57k") pod "b27074df-f036-4c0a-8771-c75dc95cf26a" (UID: "b27074df-f036-4c0a-8771-c75dc95cf26a"). InnerVolumeSpecName "kube-api-access-7x57k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:45:03.328349 kubelet[3157]: I1212 18:45:03.328316 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5c0af83-6dc7-4107-8a9e-0a3cda938f36-kube-api-access-zc77q" (OuterVolumeSpecName: "kube-api-access-zc77q") pod "f5c0af83-6dc7-4107-8a9e-0a3cda938f36" (UID: "f5c0af83-6dc7-4107-8a9e-0a3cda938f36"). InnerVolumeSpecName "kube-api-access-zc77q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:45:03.330983 kubelet[3157]: I1212 18:45:03.330846 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5c0af83-6dc7-4107-8a9e-0a3cda938f36-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f5c0af83-6dc7-4107-8a9e-0a3cda938f36" (UID: "f5c0af83-6dc7-4107-8a9e-0a3cda938f36"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:45:03.330983 kubelet[3157]: I1212 18:45:03.330909 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-cni-path" (OuterVolumeSpecName: "cni-path") pod "b27074df-f036-4c0a-8771-c75dc95cf26a" (UID: "b27074df-f036-4c0a-8771-c75dc95cf26a"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:45:03.330983 kubelet[3157]: I1212 18:45:03.330932 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b27074df-f036-4c0a-8771-c75dc95cf26a" (UID: "b27074df-f036-4c0a-8771-c75dc95cf26a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:45:03.331494 kubelet[3157]: I1212 18:45:03.331470 3157 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27074df-f036-4c0a-8771-c75dc95cf26a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b27074df-f036-4c0a-8771-c75dc95cf26a" (UID: "b27074df-f036-4c0a-8771-c75dc95cf26a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:45:03.422788 kubelet[3157]: I1212 18:45:03.422758 3157 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b27074df-f036-4c0a-8771-c75dc95cf26a-hubble-tls\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.422788 kubelet[3157]: I1212 18:45:03.422784 3157 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zc77q\" (UniqueName: \"kubernetes.io/projected/f5c0af83-6dc7-4107-8a9e-0a3cda938f36-kube-api-access-zc77q\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.422788 kubelet[3157]: I1212 18:45:03.422794 3157 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-host-proc-sys-net\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.422788 kubelet[3157]: I1212 18:45:03.422804 3157 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7x57k\" (UniqueName: 
\"kubernetes.io/projected/b27074df-f036-4c0a-8771-c75dc95cf26a-kube-api-access-7x57k\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.422997 kubelet[3157]: I1212 18:45:03.422813 3157 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-lib-modules\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.422997 kubelet[3157]: I1212 18:45:03.422821 3157 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b27074df-f036-4c0a-8771-c75dc95cf26a-clustermesh-secrets\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.422997 kubelet[3157]: I1212 18:45:03.422842 3157 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5c0af83-6dc7-4107-8a9e-0a3cda938f36-cilium-config-path\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.422997 kubelet[3157]: I1212 18:45:03.422852 3157 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-cni-path\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.422997 kubelet[3157]: I1212 18:45:03.422861 3157 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-bpf-maps\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.422997 kubelet[3157]: I1212 18:45:03.422868 3157 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-cilium-cgroup\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.422997 kubelet[3157]: I1212 18:45:03.422877 3157 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-etc-cni-netd\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.422997 kubelet[3157]: I1212 18:45:03.422885 3157 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-host-proc-sys-kernel\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.423118 kubelet[3157]: I1212 18:45:03.422894 3157 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-cilium-run\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.423118 kubelet[3157]: I1212 18:45:03.422903 3157 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-xtables-lock\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.423118 kubelet[3157]: I1212 18:45:03.422912 3157 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b27074df-f036-4c0a-8771-c75dc95cf26a-hostproc\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.423118 kubelet[3157]: I1212 18:45:03.422920 3157 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b27074df-f036-4c0a-8771-c75dc95cf26a-cilium-config-path\") on node \"ci-4459.2.2-a-7f4437f45c\" DevicePath \"\"" Dec 12 18:45:03.665280 kubelet[3157]: I1212 18:45:03.665249 3157 scope.go:117] "RemoveContainer" containerID="9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff" Dec 12 18:45:03.669536 containerd[1725]: time="2025-12-12T18:45:03.669326891Z" level=info msg="RemoveContainer for \"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\"" Dec 12 18:45:03.674896 systemd[1]: Removed slice 
kubepods-burstable-podb27074df_f036_4c0a_8771_c75dc95cf26a.slice - libcontainer container kubepods-burstable-podb27074df_f036_4c0a_8771_c75dc95cf26a.slice. Dec 12 18:45:03.675009 systemd[1]: kubepods-burstable-podb27074df_f036_4c0a_8771_c75dc95cf26a.slice: Consumed 6.046s CPU time, 125.4M memory peak, 128K read from disk, 13.3M written to disk. Dec 12 18:45:03.678702 systemd[1]: Removed slice kubepods-besteffort-podf5c0af83_6dc7_4107_8a9e_0a3cda938f36.slice - libcontainer container kubepods-besteffort-podf5c0af83_6dc7_4107_8a9e_0a3cda938f36.slice. Dec 12 18:45:03.680480 containerd[1725]: time="2025-12-12T18:45:03.680452460Z" level=info msg="RemoveContainer for \"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\" returns successfully" Dec 12 18:45:03.681459 kubelet[3157]: I1212 18:45:03.681382 3157 scope.go:117] "RemoveContainer" containerID="e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1" Dec 12 18:45:03.684107 containerd[1725]: time="2025-12-12T18:45:03.684043635Z" level=info msg="RemoveContainer for \"e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1\"" Dec 12 18:45:03.696539 containerd[1725]: time="2025-12-12T18:45:03.696502766Z" level=info msg="RemoveContainer for \"e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1\" returns successfully" Dec 12 18:45:03.696948 kubelet[3157]: I1212 18:45:03.696903 3157 scope.go:117] "RemoveContainer" containerID="e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b" Dec 12 18:45:03.703130 containerd[1725]: time="2025-12-12T18:45:03.703059747Z" level=info msg="RemoveContainer for \"e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b\"" Dec 12 18:45:03.714892 containerd[1725]: time="2025-12-12T18:45:03.714848913Z" level=info msg="RemoveContainer for \"e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b\" returns successfully" Dec 12 18:45:03.715050 kubelet[3157]: I1212 18:45:03.715007 3157 scope.go:117] "RemoveContainer" 
containerID="54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe" Dec 12 18:45:03.716307 containerd[1725]: time="2025-12-12T18:45:03.716273382Z" level=info msg="RemoveContainer for \"54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe\"" Dec 12 18:45:03.723924 containerd[1725]: time="2025-12-12T18:45:03.723898557Z" level=info msg="RemoveContainer for \"54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe\" returns successfully" Dec 12 18:45:03.724060 kubelet[3157]: I1212 18:45:03.724040 3157 scope.go:117] "RemoveContainer" containerID="4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8" Dec 12 18:45:03.725223 containerd[1725]: time="2025-12-12T18:45:03.725189048Z" level=info msg="RemoveContainer for \"4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8\"" Dec 12 18:45:03.733146 containerd[1725]: time="2025-12-12T18:45:03.733116905Z" level=info msg="RemoveContainer for \"4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8\" returns successfully" Dec 12 18:45:03.733305 kubelet[3157]: I1212 18:45:03.733263 3157 scope.go:117] "RemoveContainer" containerID="9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff" Dec 12 18:45:03.733493 containerd[1725]: time="2025-12-12T18:45:03.733464821Z" level=error msg="ContainerStatus for \"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\": not found" Dec 12 18:45:03.733647 kubelet[3157]: E1212 18:45:03.733624 3157 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\": not found" containerID="9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff" Dec 12 18:45:03.733753 kubelet[3157]: I1212 18:45:03.733649 
3157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff"} err="failed to get container status \"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c0f6fdc166002d0e72873fb83fa7170f978c3ff9e37abc9f34a6c192ef3d7ff\": not found" Dec 12 18:45:03.733753 kubelet[3157]: I1212 18:45:03.733750 3157 scope.go:117] "RemoveContainer" containerID="e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1" Dec 12 18:45:03.733948 containerd[1725]: time="2025-12-12T18:45:03.733921449Z" level=error msg="ContainerStatus for \"e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1\": not found" Dec 12 18:45:03.734032 kubelet[3157]: E1212 18:45:03.734012 3157 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1\": not found" containerID="e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1" Dec 12 18:45:03.734076 kubelet[3157]: I1212 18:45:03.734034 3157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1"} err="failed to get container status \"e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"e37249c40b01b27593c3558bd95542da9c5eacbac3958d3c8a68ab2830e8a5a1\": not found" Dec 12 18:45:03.734076 kubelet[3157]: I1212 18:45:03.734051 3157 scope.go:117] "RemoveContainer" 
containerID="e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b" Dec 12 18:45:03.734199 containerd[1725]: time="2025-12-12T18:45:03.734175468Z" level=error msg="ContainerStatus for \"e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b\": not found" Dec 12 18:45:03.734290 kubelet[3157]: E1212 18:45:03.734274 3157 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b\": not found" containerID="e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b" Dec 12 18:45:03.734364 kubelet[3157]: I1212 18:45:03.734292 3157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b"} err="failed to get container status \"e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9bff6d7c43e455a84fe4727c18a08118860e6d8329f2a98b4575287edfccc7b\": not found" Dec 12 18:45:03.734364 kubelet[3157]: I1212 18:45:03.734308 3157 scope.go:117] "RemoveContainer" containerID="54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe" Dec 12 18:45:03.734551 containerd[1725]: time="2025-12-12T18:45:03.734519823Z" level=error msg="ContainerStatus for \"54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe\": not found" Dec 12 18:45:03.734648 kubelet[3157]: E1212 18:45:03.734632 3157 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe\": not found" containerID="54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe" Dec 12 18:45:03.734688 kubelet[3157]: I1212 18:45:03.734650 3157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe"} err="failed to get container status \"54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"54d03fe35a892bdab45f78de3a7aa53aa04c8f2a6d98acea0d27dbb34e0691fe\": not found" Dec 12 18:45:03.734688 kubelet[3157]: I1212 18:45:03.734664 3157 scope.go:117] "RemoveContainer" containerID="4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8" Dec 12 18:45:03.734847 containerd[1725]: time="2025-12-12T18:45:03.734796061Z" level=error msg="ContainerStatus for \"4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8\": not found" Dec 12 18:45:03.734939 kubelet[3157]: E1212 18:45:03.734921 3157 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8\": not found" containerID="4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8" Dec 12 18:45:03.734988 kubelet[3157]: I1212 18:45:03.734970 3157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8"} err="failed to get container status \"4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"4221601a5242fe23194fa1ab933e0f0853911a5245bd12a3c09952feeced70d8\": not found" Dec 12 18:45:03.735021 kubelet[3157]: I1212 18:45:03.734988 3157 scope.go:117] "RemoveContainer" containerID="40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d" Dec 12 18:45:03.737380 containerd[1725]: time="2025-12-12T18:45:03.736990179Z" level=info msg="RemoveContainer for \"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\"" Dec 12 18:45:03.745754 containerd[1725]: time="2025-12-12T18:45:03.745721883Z" level=info msg="RemoveContainer for \"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\" returns successfully" Dec 12 18:45:03.746014 kubelet[3157]: I1212 18:45:03.745976 3157 scope.go:117] "RemoveContainer" containerID="40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d" Dec 12 18:45:03.746206 containerd[1725]: time="2025-12-12T18:45:03.746178965Z" level=error msg="ContainerStatus for \"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\": not found" Dec 12 18:45:03.746308 kubelet[3157]: E1212 18:45:03.746286 3157 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\": not found" containerID="40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d" Dec 12 18:45:03.746361 kubelet[3157]: I1212 18:45:03.746313 3157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d"} err="failed to get container status \"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"40925059636351ceff24cddd0de23efb1067798516a68a867371b751c2b2915d\": not found" Dec 12 18:45:03.987639 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5f61c143b7f6a8952106e3ff2bcd4f84f848cde8d9d7b7f9ea4fef79f608ee6-shm.mount: Deactivated successfully. Dec 12 18:45:03.987739 systemd[1]: var-lib-kubelet-pods-f5c0af83\x2d6dc7\x2d4107\x2d8a9e\x2d0a3cda938f36-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzc77q.mount: Deactivated successfully. Dec 12 18:45:03.987811 systemd[1]: var-lib-kubelet-pods-b27074df\x2df036\x2d4c0a\x2d8771\x2dc75dc95cf26a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7x57k.mount: Deactivated successfully. Dec 12 18:45:03.988331 systemd[1]: var-lib-kubelet-pods-b27074df\x2df036\x2d4c0a\x2d8771\x2dc75dc95cf26a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 12 18:45:03.988406 systemd[1]: var-lib-kubelet-pods-b27074df\x2df036\x2d4c0a\x2d8771\x2dc75dc95cf26a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 12 18:45:04.310205 kubelet[3157]: I1212 18:45:04.310115 3157 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b27074df-f036-4c0a-8771-c75dc95cf26a" path="/var/lib/kubelet/pods/b27074df-f036-4c0a-8771-c75dc95cf26a/volumes" Dec 12 18:45:04.310618 kubelet[3157]: I1212 18:45:04.310598 3157 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5c0af83-6dc7-4107-8a9e-0a3cda938f36" path="/var/lib/kubelet/pods/f5c0af83-6dc7-4107-8a9e-0a3cda938f36/volumes" Dec 12 18:45:04.420930 kubelet[3157]: E1212 18:45:04.420878 3157 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 12 18:45:04.986103 sshd[4699]: Connection closed by 10.200.16.10 port 46004 Dec 12 18:45:04.986443 sshd-session[4696]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:04.990521 systemd[1]: sshd@21-10.200.4.12:22-10.200.16.10:46004.service: Deactivated successfully. Dec 12 18:45:04.992743 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 18:45:04.994166 systemd-logind[1691]: Session 24 logged out. Waiting for processes to exit. Dec 12 18:45:04.995722 systemd-logind[1691]: Removed session 24. Dec 12 18:45:05.095894 systemd[1]: Started sshd@22-10.200.4.12:22-10.200.16.10:46008.service - OpenSSH per-connection server daemon (10.200.16.10:46008). Dec 12 18:45:05.694683 sshd[4842]: Accepted publickey for core from 10.200.16.10 port 46008 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk Dec 12 18:45:05.695927 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:05.700475 systemd-logind[1691]: New session 25 of user core. Dec 12 18:45:05.706993 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 12 18:45:06.587575 kubelet[3157]: I1212 18:45:06.586191 3157 memory_manager.go:355] "RemoveStaleState removing state" podUID="f5c0af83-6dc7-4107-8a9e-0a3cda938f36" containerName="cilium-operator" Dec 12 18:45:06.587575 kubelet[3157]: I1212 18:45:06.586224 3157 memory_manager.go:355] "RemoveStaleState removing state" podUID="b27074df-f036-4c0a-8771-c75dc95cf26a" containerName="cilium-agent" Dec 12 18:45:06.601184 systemd[1]: Created slice kubepods-burstable-pod34f81064_d15d_42b8_91d0_dce4377128d9.slice - libcontainer container kubepods-burstable-pod34f81064_d15d_42b8_91d0_dce4377128d9.slice. Dec 12 18:45:06.681448 sshd[4845]: Connection closed by 10.200.16.10 port 46008 Dec 12 18:45:06.681949 sshd-session[4842]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:06.685793 systemd[1]: sshd@22-10.200.4.12:22-10.200.16.10:46008.service: Deactivated successfully. Dec 12 18:45:06.690044 systemd[1]: session-25.scope: Deactivated successfully. Dec 12 18:45:06.691979 systemd-logind[1691]: Session 25 logged out. Waiting for processes to exit. Dec 12 18:45:06.695974 systemd-logind[1691]: Removed session 25. 
Dec 12 18:45:06.738777 kubelet[3157]: I1212 18:45:06.738746 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fvqp\" (UniqueName: \"kubernetes.io/projected/34f81064-d15d-42b8-91d0-dce4377128d9-kube-api-access-6fvqp\") pod \"cilium-cwskh\" (UID: \"34f81064-d15d-42b8-91d0-dce4377128d9\") " pod="kube-system/cilium-cwskh" Dec 12 18:45:06.738901 kubelet[3157]: I1212 18:45:06.738786 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34f81064-d15d-42b8-91d0-dce4377128d9-clustermesh-secrets\") pod \"cilium-cwskh\" (UID: \"34f81064-d15d-42b8-91d0-dce4377128d9\") " pod="kube-system/cilium-cwskh" Dec 12 18:45:06.738901 kubelet[3157]: I1212 18:45:06.738808 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34f81064-d15d-42b8-91d0-dce4377128d9-host-proc-sys-net\") pod \"cilium-cwskh\" (UID: \"34f81064-d15d-42b8-91d0-dce4377128d9\") " pod="kube-system/cilium-cwskh" Dec 12 18:45:06.738901 kubelet[3157]: I1212 18:45:06.738845 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34f81064-d15d-42b8-91d0-dce4377128d9-host-proc-sys-kernel\") pod \"cilium-cwskh\" (UID: \"34f81064-d15d-42b8-91d0-dce4377128d9\") " pod="kube-system/cilium-cwskh" Dec 12 18:45:06.738901 kubelet[3157]: I1212 18:45:06.738865 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34f81064-d15d-42b8-91d0-dce4377128d9-cilium-cgroup\") pod \"cilium-cwskh\" (UID: \"34f81064-d15d-42b8-91d0-dce4377128d9\") " pod="kube-system/cilium-cwskh" Dec 12 18:45:06.738901 kubelet[3157]: I1212 18:45:06.738883 3157 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34f81064-d15d-42b8-91d0-dce4377128d9-cni-path\") pod \"cilium-cwskh\" (UID: \"34f81064-d15d-42b8-91d0-dce4377128d9\") " pod="kube-system/cilium-cwskh" Dec 12 18:45:06.739038 kubelet[3157]: I1212 18:45:06.738903 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34f81064-d15d-42b8-91d0-dce4377128d9-cilium-config-path\") pod \"cilium-cwskh\" (UID: \"34f81064-d15d-42b8-91d0-dce4377128d9\") " pod="kube-system/cilium-cwskh" Dec 12 18:45:06.739038 kubelet[3157]: I1212 18:45:06.738925 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34f81064-d15d-42b8-91d0-dce4377128d9-lib-modules\") pod \"cilium-cwskh\" (UID: \"34f81064-d15d-42b8-91d0-dce4377128d9\") " pod="kube-system/cilium-cwskh" Dec 12 18:45:06.739038 kubelet[3157]: I1212 18:45:06.738946 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34f81064-d15d-42b8-91d0-dce4377128d9-bpf-maps\") pod \"cilium-cwskh\" (UID: \"34f81064-d15d-42b8-91d0-dce4377128d9\") " pod="kube-system/cilium-cwskh" Dec 12 18:45:06.739038 kubelet[3157]: I1212 18:45:06.738965 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34f81064-d15d-42b8-91d0-dce4377128d9-hubble-tls\") pod \"cilium-cwskh\" (UID: \"34f81064-d15d-42b8-91d0-dce4377128d9\") " pod="kube-system/cilium-cwskh" Dec 12 18:45:06.739038 kubelet[3157]: I1212 18:45:06.738998 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/34f81064-d15d-42b8-91d0-dce4377128d9-cilium-run\") pod \"cilium-cwskh\" (UID: \"34f81064-d15d-42b8-91d0-dce4377128d9\") " pod="kube-system/cilium-cwskh" Dec 12 18:45:06.739038 kubelet[3157]: I1212 18:45:06.739018 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/34f81064-d15d-42b8-91d0-dce4377128d9-cilium-ipsec-secrets\") pod \"cilium-cwskh\" (UID: \"34f81064-d15d-42b8-91d0-dce4377128d9\") " pod="kube-system/cilium-cwskh" Dec 12 18:45:06.739189 kubelet[3157]: I1212 18:45:06.739037 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34f81064-d15d-42b8-91d0-dce4377128d9-hostproc\") pod \"cilium-cwskh\" (UID: \"34f81064-d15d-42b8-91d0-dce4377128d9\") " pod="kube-system/cilium-cwskh" Dec 12 18:45:06.739189 kubelet[3157]: I1212 18:45:06.739064 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34f81064-d15d-42b8-91d0-dce4377128d9-etc-cni-netd\") pod \"cilium-cwskh\" (UID: \"34f81064-d15d-42b8-91d0-dce4377128d9\") " pod="kube-system/cilium-cwskh" Dec 12 18:45:06.739189 kubelet[3157]: I1212 18:45:06.739088 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34f81064-d15d-42b8-91d0-dce4377128d9-xtables-lock\") pod \"cilium-cwskh\" (UID: \"34f81064-d15d-42b8-91d0-dce4377128d9\") " pod="kube-system/cilium-cwskh" Dec 12 18:45:06.785041 systemd[1]: Started sshd@23-10.200.4.12:22-10.200.16.10:46014.service - OpenSSH per-connection server daemon (10.200.16.10:46014). 
Dec 12 18:45:06.905959 containerd[1725]: time="2025-12-12T18:45:06.905808291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cwskh,Uid:34f81064-d15d-42b8-91d0-dce4377128d9,Namespace:kube-system,Attempt:0,}" Dec 12 18:45:06.955311 containerd[1725]: time="2025-12-12T18:45:06.955275182Z" level=info msg="connecting to shim 8570f35a34cccaaacd7fe06555e55dce8984fdbd7300ea5c2b4b2f6d22dface1" address="unix:///run/containerd/s/3a87acec884214e6781c1c0392593e50ced022bd30f9d2ea0ca68195a0321c1d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:45:06.979012 systemd[1]: Started cri-containerd-8570f35a34cccaaacd7fe06555e55dce8984fdbd7300ea5c2b4b2f6d22dface1.scope - libcontainer container 8570f35a34cccaaacd7fe06555e55dce8984fdbd7300ea5c2b4b2f6d22dface1. Dec 12 18:45:07.007374 containerd[1725]: time="2025-12-12T18:45:07.007342068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cwskh,Uid:34f81064-d15d-42b8-91d0-dce4377128d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8570f35a34cccaaacd7fe06555e55dce8984fdbd7300ea5c2b4b2f6d22dface1\"" Dec 12 18:45:07.010521 containerd[1725]: time="2025-12-12T18:45:07.010265055Z" level=info msg="CreateContainer within sandbox \"8570f35a34cccaaacd7fe06555e55dce8984fdbd7300ea5c2b4b2f6d22dface1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 12 18:45:07.027796 containerd[1725]: time="2025-12-12T18:45:07.027762310Z" level=info msg="Container 7af674d640f2a469900fb0230f9afc96dd8c283c4b473176dc5fd3cebb3bde6a: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:45:07.041464 containerd[1725]: time="2025-12-12T18:45:07.041437512Z" level=info msg="CreateContainer within sandbox \"8570f35a34cccaaacd7fe06555e55dce8984fdbd7300ea5c2b4b2f6d22dface1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7af674d640f2a469900fb0230f9afc96dd8c283c4b473176dc5fd3cebb3bde6a\"" Dec 12 18:45:07.042008 containerd[1725]: time="2025-12-12T18:45:07.041985470Z" level=info 
msg="StartContainer for \"7af674d640f2a469900fb0230f9afc96dd8c283c4b473176dc5fd3cebb3bde6a\"" Dec 12 18:45:07.043257 containerd[1725]: time="2025-12-12T18:45:07.043224307Z" level=info msg="connecting to shim 7af674d640f2a469900fb0230f9afc96dd8c283c4b473176dc5fd3cebb3bde6a" address="unix:///run/containerd/s/3a87acec884214e6781c1c0392593e50ced022bd30f9d2ea0ca68195a0321c1d" protocol=ttrpc version=3 Dec 12 18:45:07.064008 systemd[1]: Started cri-containerd-7af674d640f2a469900fb0230f9afc96dd8c283c4b473176dc5fd3cebb3bde6a.scope - libcontainer container 7af674d640f2a469900fb0230f9afc96dd8c283c4b473176dc5fd3cebb3bde6a. Dec 12 18:45:07.095400 containerd[1725]: time="2025-12-12T18:45:07.095328499Z" level=info msg="StartContainer for \"7af674d640f2a469900fb0230f9afc96dd8c283c4b473176dc5fd3cebb3bde6a\" returns successfully" Dec 12 18:45:07.099755 systemd[1]: cri-containerd-7af674d640f2a469900fb0230f9afc96dd8c283c4b473176dc5fd3cebb3bde6a.scope: Deactivated successfully. Dec 12 18:45:07.101514 containerd[1725]: time="2025-12-12T18:45:07.101471515Z" level=info msg="received container exit event container_id:\"7af674d640f2a469900fb0230f9afc96dd8c283c4b473176dc5fd3cebb3bde6a\" id:\"7af674d640f2a469900fb0230f9afc96dd8c283c4b473176dc5fd3cebb3bde6a\" pid:4919 exited_at:{seconds:1765565107 nanos:101127636}" Dec 12 18:45:07.391361 sshd[4855]: Accepted publickey for core from 10.200.16.10 port 46014 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk Dec 12 18:45:07.392560 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:07.396865 systemd-logind[1691]: New session 26 of user core. Dec 12 18:45:07.403977 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 12 18:45:07.693266 containerd[1725]: time="2025-12-12T18:45:07.693146180Z" level=info msg="CreateContainer within sandbox \"8570f35a34cccaaacd7fe06555e55dce8984fdbd7300ea5c2b4b2f6d22dface1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 12 18:45:07.711846 containerd[1725]: time="2025-12-12T18:45:07.711533095Z" level=info msg="Container d80729e99643b7c7738946ad1065d29acbc901e2300f462e551b404911717838: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:45:07.729072 containerd[1725]: time="2025-12-12T18:45:07.729042310Z" level=info msg="CreateContainer within sandbox \"8570f35a34cccaaacd7fe06555e55dce8984fdbd7300ea5c2b4b2f6d22dface1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d80729e99643b7c7738946ad1065d29acbc901e2300f462e551b404911717838\"" Dec 12 18:45:07.729471 containerd[1725]: time="2025-12-12T18:45:07.729451076Z" level=info msg="StartContainer for \"d80729e99643b7c7738946ad1065d29acbc901e2300f462e551b404911717838\"" Dec 12 18:45:07.731204 containerd[1725]: time="2025-12-12T18:45:07.731170166Z" level=info msg="connecting to shim d80729e99643b7c7738946ad1065d29acbc901e2300f462e551b404911717838" address="unix:///run/containerd/s/3a87acec884214e6781c1c0392593e50ced022bd30f9d2ea0ca68195a0321c1d" protocol=ttrpc version=3 Dec 12 18:45:07.748040 systemd[1]: Started cri-containerd-d80729e99643b7c7738946ad1065d29acbc901e2300f462e551b404911717838.scope - libcontainer container d80729e99643b7c7738946ad1065d29acbc901e2300f462e551b404911717838. Dec 12 18:45:07.784325 containerd[1725]: time="2025-12-12T18:45:07.784301363Z" level=info msg="StartContainer for \"d80729e99643b7c7738946ad1065d29acbc901e2300f462e551b404911717838\" returns successfully" Dec 12 18:45:07.785085 systemd[1]: cri-containerd-d80729e99643b7c7738946ad1065d29acbc901e2300f462e551b404911717838.scope: Deactivated successfully. 
Dec 12 18:45:07.787235 containerd[1725]: time="2025-12-12T18:45:07.787205321Z" level=info msg="received container exit event container_id:\"d80729e99643b7c7738946ad1065d29acbc901e2300f462e551b404911717838\" id:\"d80729e99643b7c7738946ad1065d29acbc901e2300f462e551b404911717838\" pid:4970 exited_at:{seconds:1765565107 nanos:786920762}" Dec 12 18:45:07.833028 sshd[4954]: Connection closed by 10.200.16.10 port 46014 Dec 12 18:45:07.833571 sshd-session[4855]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:07.836903 systemd[1]: sshd@23-10.200.4.12:22-10.200.16.10:46014.service: Deactivated successfully. Dec 12 18:45:07.838596 systemd[1]: session-26.scope: Deactivated successfully. Dec 12 18:45:07.839331 systemd-logind[1691]: Session 26 logged out. Waiting for processes to exit. Dec 12 18:45:07.841175 systemd-logind[1691]: Removed session 26. Dec 12 18:45:07.938086 systemd[1]: Started sshd@24-10.200.4.12:22-10.200.16.10:46016.service - OpenSSH per-connection server daemon (10.200.16.10:46016). Dec 12 18:45:08.173948 kubelet[3157]: I1212 18:45:08.170890 3157 setters.go:602] "Node became not ready" node="ci-4459.2.2-a-7f4437f45c" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T18:45:08Z","lastTransitionTime":"2025-12-12T18:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 12 18:45:08.536728 sshd[5006]: Accepted publickey for core from 10.200.16.10 port 46016 ssh2: RSA SHA256:QSRckynyvfgnUcJAhE6+d/YoEeI2sBPOet2KOiKPENk Dec 12 18:45:08.537941 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:08.541914 systemd-logind[1691]: New session 27 of user core. Dec 12 18:45:08.547973 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 12 18:45:08.693312 containerd[1725]: time="2025-12-12T18:45:08.693273959Z" level=info msg="CreateContainer within sandbox \"8570f35a34cccaaacd7fe06555e55dce8984fdbd7300ea5c2b4b2f6d22dface1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 12 18:45:08.725415 containerd[1725]: time="2025-12-12T18:45:08.723537803Z" level=info msg="Container 4f2d49a8a4e3f2f5df4eba6687404b34905deb294566d1c7e3ecc8119f396b94: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:45:08.729300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211100294.mount: Deactivated successfully. Dec 12 18:45:08.740270 containerd[1725]: time="2025-12-12T18:45:08.740238993Z" level=info msg="CreateContainer within sandbox \"8570f35a34cccaaacd7fe06555e55dce8984fdbd7300ea5c2b4b2f6d22dface1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4f2d49a8a4e3f2f5df4eba6687404b34905deb294566d1c7e3ecc8119f396b94\"" Dec 12 18:45:08.741533 containerd[1725]: time="2025-12-12T18:45:08.740822662Z" level=info msg="StartContainer for \"4f2d49a8a4e3f2f5df4eba6687404b34905deb294566d1c7e3ecc8119f396b94\"" Dec 12 18:45:08.742510 containerd[1725]: time="2025-12-12T18:45:08.742472508Z" level=info msg="connecting to shim 4f2d49a8a4e3f2f5df4eba6687404b34905deb294566d1c7e3ecc8119f396b94" address="unix:///run/containerd/s/3a87acec884214e6781c1c0392593e50ced022bd30f9d2ea0ca68195a0321c1d" protocol=ttrpc version=3 Dec 12 18:45:08.768178 systemd[1]: Started cri-containerd-4f2d49a8a4e3f2f5df4eba6687404b34905deb294566d1c7e3ecc8119f396b94.scope - libcontainer container 4f2d49a8a4e3f2f5df4eba6687404b34905deb294566d1c7e3ecc8119f396b94. Dec 12 18:45:08.834544 systemd[1]: cri-containerd-4f2d49a8a4e3f2f5df4eba6687404b34905deb294566d1c7e3ecc8119f396b94.scope: Deactivated successfully. 
Dec 12 18:45:08.838400 containerd[1725]: time="2025-12-12T18:45:08.838372977Z" level=info msg="received container exit event container_id:\"4f2d49a8a4e3f2f5df4eba6687404b34905deb294566d1c7e3ecc8119f396b94\" id:\"4f2d49a8a4e3f2f5df4eba6687404b34905deb294566d1c7e3ecc8119f396b94\" pid:5025 exited_at:{seconds:1765565108 nanos:838248835}" Dec 12 18:45:08.839702 containerd[1725]: time="2025-12-12T18:45:08.839660021Z" level=info msg="StartContainer for \"4f2d49a8a4e3f2f5df4eba6687404b34905deb294566d1c7e3ecc8119f396b94\" returns successfully" Dec 12 18:45:08.860726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f2d49a8a4e3f2f5df4eba6687404b34905deb294566d1c7e3ecc8119f396b94-rootfs.mount: Deactivated successfully. Dec 12 18:45:09.422033 kubelet[3157]: E1212 18:45:09.421991 3157 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 12 18:45:09.699050 containerd[1725]: time="2025-12-12T18:45:09.698910344Z" level=info msg="CreateContainer within sandbox \"8570f35a34cccaaacd7fe06555e55dce8984fdbd7300ea5c2b4b2f6d22dface1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 12 18:45:09.729859 containerd[1725]: time="2025-12-12T18:45:09.728528883Z" level=info msg="Container 9d67570dddbeac8a207bea5ce20d626b722dfd15e35b424c82abf909e2c0ab03: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:45:09.733976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3794465192.mount: Deactivated successfully. 
Dec 12 18:45:09.746219 containerd[1725]: time="2025-12-12T18:45:09.746189502Z" level=info msg="CreateContainer within sandbox \"8570f35a34cccaaacd7fe06555e55dce8984fdbd7300ea5c2b4b2f6d22dface1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9d67570dddbeac8a207bea5ce20d626b722dfd15e35b424c82abf909e2c0ab03\"" Dec 12 18:45:09.746672 containerd[1725]: time="2025-12-12T18:45:09.746650539Z" level=info msg="StartContainer for \"9d67570dddbeac8a207bea5ce20d626b722dfd15e35b424c82abf909e2c0ab03\"" Dec 12 18:45:09.748188 containerd[1725]: time="2025-12-12T18:45:09.748161336Z" level=info msg="connecting to shim 9d67570dddbeac8a207bea5ce20d626b722dfd15e35b424c82abf909e2c0ab03" address="unix:///run/containerd/s/3a87acec884214e6781c1c0392593e50ced022bd30f9d2ea0ca68195a0321c1d" protocol=ttrpc version=3 Dec 12 18:45:09.769002 systemd[1]: Started cri-containerd-9d67570dddbeac8a207bea5ce20d626b722dfd15e35b424c82abf909e2c0ab03.scope - libcontainer container 9d67570dddbeac8a207bea5ce20d626b722dfd15e35b424c82abf909e2c0ab03. Dec 12 18:45:09.792076 systemd[1]: cri-containerd-9d67570dddbeac8a207bea5ce20d626b722dfd15e35b424c82abf909e2c0ab03.scope: Deactivated successfully. Dec 12 18:45:09.797903 containerd[1725]: time="2025-12-12T18:45:09.797809099Z" level=info msg="received container exit event container_id:\"9d67570dddbeac8a207bea5ce20d626b722dfd15e35b424c82abf909e2c0ab03\" id:\"9d67570dddbeac8a207bea5ce20d626b722dfd15e35b424c82abf909e2c0ab03\" pid:5072 exited_at:{seconds:1765565109 nanos:793151132}" Dec 12 18:45:09.804395 containerd[1725]: time="2025-12-12T18:45:09.804319109Z" level=info msg="StartContainer for \"9d67570dddbeac8a207bea5ce20d626b722dfd15e35b424c82abf909e2c0ab03\" returns successfully" Dec 12 18:45:09.859201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d67570dddbeac8a207bea5ce20d626b722dfd15e35b424c82abf909e2c0ab03-rootfs.mount: Deactivated successfully. 
Dec 12 18:45:10.705552 containerd[1725]: time="2025-12-12T18:45:10.705473676Z" level=info msg="CreateContainer within sandbox \"8570f35a34cccaaacd7fe06555e55dce8984fdbd7300ea5c2b4b2f6d22dface1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 12 18:45:10.736866 containerd[1725]: time="2025-12-12T18:45:10.735486551Z" level=info msg="Container 755fcb47cfa822ceb96bed5dfe530106a6f9082751f504ffc04d4c471dc11857: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:45:10.751911 containerd[1725]: time="2025-12-12T18:45:10.751874942Z" level=info msg="CreateContainer within sandbox \"8570f35a34cccaaacd7fe06555e55dce8984fdbd7300ea5c2b4b2f6d22dface1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"755fcb47cfa822ceb96bed5dfe530106a6f9082751f504ffc04d4c471dc11857\"" Dec 12 18:45:10.752874 containerd[1725]: time="2025-12-12T18:45:10.752363123Z" level=info msg="StartContainer for \"755fcb47cfa822ceb96bed5dfe530106a6f9082751f504ffc04d4c471dc11857\"" Dec 12 18:45:10.753910 containerd[1725]: time="2025-12-12T18:45:10.753794198Z" level=info msg="connecting to shim 755fcb47cfa822ceb96bed5dfe530106a6f9082751f504ffc04d4c471dc11857" address="unix:///run/containerd/s/3a87acec884214e6781c1c0392593e50ced022bd30f9d2ea0ca68195a0321c1d" protocol=ttrpc version=3 Dec 12 18:45:10.780987 systemd[1]: Started cri-containerd-755fcb47cfa822ceb96bed5dfe530106a6f9082751f504ffc04d4c471dc11857.scope - libcontainer container 755fcb47cfa822ceb96bed5dfe530106a6f9082751f504ffc04d4c471dc11857. 
Dec 12 18:45:10.822225 containerd[1725]: time="2025-12-12T18:45:10.822129486Z" level=info msg="StartContainer for \"755fcb47cfa822ceb96bed5dfe530106a6f9082751f504ffc04d4c471dc11857\" returns successfully" Dec 12 18:45:11.232917 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Dec 12 18:45:13.947263 systemd-networkd[1344]: lxc_health: Link UP Dec 12 18:45:13.950368 systemd-networkd[1344]: lxc_health: Gained carrier Dec 12 18:45:14.932122 kubelet[3157]: I1212 18:45:14.931676 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cwskh" podStartSLOduration=8.931657503 podStartE2EDuration="8.931657503s" podCreationTimestamp="2025-12-12 18:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:45:11.726536907 +0000 UTC m=+157.518922563" watchObservedRunningTime="2025-12-12 18:45:14.931657503 +0000 UTC m=+160.724043156" Dec 12 18:45:15.374999 systemd-networkd[1344]: lxc_health: Gained IPv6LL Dec 12 18:45:17.414385 kubelet[3157]: E1212 18:45:17.414331 3157 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46354->127.0.0.1:36327: write tcp 127.0.0.1:46354->127.0.0.1:36327: write: broken pipe Dec 12 18:45:17.627856 update_engine[1692]: I20251212 18:45:17.627501 1692 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 12 18:45:17.627856 update_engine[1692]: I20251212 18:45:17.627558 1692 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 12 18:45:17.627856 update_engine[1692]: I20251212 18:45:17.627736 1692 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 12 18:45:17.628300 update_engine[1692]: I20251212 18:45:17.628274 1692 omaha_request_params.cc:62] Current group set to stable Dec 12 18:45:17.628475 update_engine[1692]: I20251212 18:45:17.628458 
1692 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 12 18:45:17.628591 update_engine[1692]: I20251212 18:45:17.628578 1692 update_attempter.cc:643] Scheduling an action processor start. Dec 12 18:45:17.628692 update_engine[1692]: I20251212 18:45:17.628679 1692 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 12 18:45:17.628896 update_engine[1692]: I20251212 18:45:17.628881 1692 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 12 18:45:17.630180 update_engine[1692]: I20251212 18:45:17.629092 1692 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 12 18:45:17.630180 update_engine[1692]: I20251212 18:45:17.629105 1692 omaha_request_action.cc:272] Request: Dec 12 18:45:17.630180 update_engine[1692]: Dec 12 18:45:17.630180 update_engine[1692]: Dec 12 18:45:17.630180 update_engine[1692]: Dec 12 18:45:17.630180 update_engine[1692]: Dec 12 18:45:17.630180 update_engine[1692]: Dec 12 18:45:17.630180 update_engine[1692]: Dec 12 18:45:17.630180 update_engine[1692]: Dec 12 18:45:17.630180 update_engine[1692]: Dec 12 18:45:17.630180 update_engine[1692]: I20251212 18:45:17.629112 1692 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 18:45:17.631519 update_engine[1692]: I20251212 18:45:17.631487 1692 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 18:45:17.632355 update_engine[1692]: I20251212 18:45:17.632323 1692 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 12 18:45:17.632496 locksmithd[1752]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 12 18:45:17.691073 update_engine[1692]: E20251212 18:45:17.690965 1692 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 12 18:45:17.691754 update_engine[1692]: I20251212 18:45:17.691722 1692 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 12 18:45:19.626175 sshd[5010]: Connection closed by 10.200.16.10 port 46016 Dec 12 18:45:19.626818 sshd-session[5006]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:19.631016 systemd[1]: sshd@24-10.200.4.12:22-10.200.16.10:46016.service: Deactivated successfully. Dec 12 18:45:19.633158 systemd[1]: session-27.scope: Deactivated successfully. Dec 12 18:45:19.634002 systemd-logind[1691]: Session 27 logged out. Waiting for processes to exit. Dec 12 18:45:19.635567 systemd-logind[1691]: Removed session 27.