Dec 16 13:05:53.859843 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:05:53.859872 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:05:53.859886 kernel: BIOS-provided physical RAM map:
Dec 16 13:05:53.859898 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:05:53.859906 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 16 13:05:53.859915 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 16 13:05:53.859924 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 16 13:05:53.859933 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 16 13:05:53.859942 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 16 13:05:53.859951 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 16 13:05:53.859959 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Dec 16 13:05:53.859967 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Dec 16 13:05:53.859980 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Dec 16 13:05:53.859989 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Dec 16 13:05:53.860000 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Dec 16 13:05:53.860010 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 16 13:05:53.860019 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Dec 16 13:05:53.860031 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Dec 16 13:05:53.860040 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Dec 16 13:05:53.860050 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Dec 16 13:05:53.860059 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Dec 16 13:05:53.860069 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 16 13:05:53.860078 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 16 13:05:53.860088 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 13:05:53.860097 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 16 13:05:53.860107 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 13:05:53.860116 kernel: NX (Execute Disable) protection: active
Dec 16 13:05:53.860125 kernel: APIC: Static calls initialized
Dec 16 13:05:53.860137 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Dec 16 13:05:53.860147 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Dec 16 13:05:53.860156 kernel: extended physical RAM map:
Dec 16 13:05:53.860166 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:05:53.860176 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 16 13:05:53.860185 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 16 13:05:53.860194 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 16 13:05:53.860203 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 16 13:05:53.860212 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 16 13:05:53.860231 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 16 13:05:53.860240 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Dec 16 13:05:53.860253 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Dec 16 13:05:53.860266 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Dec 16 13:05:53.860276 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Dec 16 13:05:53.860285 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Dec 16 13:05:53.860296 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Dec 16 13:05:53.860308 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Dec 16 13:05:53.860318 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Dec 16 13:05:53.860328 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Dec 16 13:05:53.860338 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 16 13:05:53.860348 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Dec 16 13:05:53.860357 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Dec 16 13:05:53.860367 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Dec 16 13:05:53.860377 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Dec 16 13:05:53.860387 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Dec 16 13:05:53.860397 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 16 13:05:53.860407 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 16 13:05:53.860419 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 13:05:53.860429 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 16 13:05:53.860439 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 13:05:53.860448 kernel: efi: EFI v2.7 by EDK II
Dec 16 13:05:53.860458 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Dec 16 13:05:53.860467 kernel: random: crng init done
Dec 16 13:05:53.860477 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Dec 16 13:05:53.860487 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Dec 16 13:05:53.860497 kernel: secureboot: Secure boot disabled
Dec 16 13:05:53.860507 kernel: SMBIOS 2.8 present.
Dec 16 13:05:53.860516 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Dec 16 13:05:53.860530 kernel: DMI: Memory slots populated: 1/1
Dec 16 13:05:53.860539 kernel: Hypervisor detected: KVM
Dec 16 13:05:53.860548 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Dec 16 13:05:53.860558 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 13:05:53.860567 kernel: kvm-clock: using sched offset of 4057749257 cycles
Dec 16 13:05:53.860577 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 13:05:53.860587 kernel: tsc: Detected 2794.748 MHz processor
Dec 16 13:05:53.860671 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:05:53.860683 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:05:53.860693 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Dec 16 13:05:53.860703 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 13:05:53.860717 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:05:53.860727 kernel: Using GB pages for direct mapping
Dec 16 13:05:53.860736 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:05:53.860746 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 16 13:05:53.860767 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 16 13:05:53.860791 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:05:53.860817 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:05:53.860827 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 16 13:05:53.860837 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:05:53.860851 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:05:53.860861 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:05:53.860871 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:05:53.860885 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 16 13:05:53.860895 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 16 13:05:53.860905 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Dec 16 13:05:53.860915 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 16 13:05:53.860925 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 16 13:05:53.860935 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 16 13:05:53.860948 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 16 13:05:53.860958 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 16 13:05:53.860968 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 16 13:05:53.860978 kernel: No NUMA configuration found
Dec 16 13:05:53.860988 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Dec 16 13:05:53.860998 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Dec 16 13:05:53.861008 kernel: Zone ranges:
Dec 16 13:05:53.861018 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:05:53.861028 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Dec 16 13:05:53.861041 kernel: Normal empty
Dec 16 13:05:53.861051 kernel: Device empty
Dec 16 13:05:53.861061 kernel: Movable zone start for each node
Dec 16 13:05:53.861071 kernel: Early memory node ranges
Dec 16 13:05:53.861081 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 16 13:05:53.861091 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 16 13:05:53.861100 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 16 13:05:53.861110 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Dec 16 13:05:53.861120 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Dec 16 13:05:53.861130 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Dec 16 13:05:53.861143 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Dec 16 13:05:53.861153 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Dec 16 13:05:53.861163 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Dec 16 13:05:53.861173 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:05:53.861192 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 16 13:05:53.861205 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 16 13:05:53.861226 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:05:53.861237 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Dec 16 13:05:53.861247 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Dec 16 13:05:53.861257 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 16 13:05:53.861267 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Dec 16 13:05:53.861278 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Dec 16 13:05:53.861292 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 13:05:53.861302 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 13:05:53.861312 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:05:53.861323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 13:05:53.861333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 13:05:53.861347 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:05:53.861357 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 13:05:53.861367 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 13:05:53.861377 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:05:53.861388 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 13:05:53.861398 kernel: TSC deadline timer available
Dec 16 13:05:53.861409 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:05:53.861419 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:05:53.861429 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:05:53.861443 kernel: CPU topo: Max. threads per core: 1
Dec 16 13:05:53.861453 kernel: CPU topo: Num. cores per package: 4
Dec 16 13:05:53.861464 kernel: CPU topo: Num. threads per package: 4
Dec 16 13:05:53.861474 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Dec 16 13:05:53.861485 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 13:05:53.861495 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 16 13:05:53.861505 kernel: kvm-guest: setup PV sched yield
Dec 16 13:05:53.861516 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Dec 16 13:05:53.861526 kernel: Booting paravirtualized kernel on KVM
Dec 16 13:05:53.861540 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:05:53.861551 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 16 13:05:53.861561 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Dec 16 13:05:53.861572 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Dec 16 13:05:53.861582 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 16 13:05:53.861592 kernel: kvm-guest: PV spinlocks enabled
Dec 16 13:05:53.861602 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:05:53.861615 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:05:53.861629 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 13:05:53.861639 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 13:05:53.861661 kernel: Fallback order for Node 0: 0
Dec 16 13:05:53.861679 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Dec 16 13:05:53.861690 kernel: Policy zone: DMA32
Dec 16 13:05:53.861701 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:05:53.861711 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 13:05:53.861725 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:05:53.861736 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:05:53.861750 kernel: Dynamic Preempt: voluntary
Dec 16 13:05:53.861760 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:05:53.861786 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:05:53.861798 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 13:05:53.861808 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:05:53.861818 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:05:53.861828 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:05:53.861838 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:05:53.861849 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 13:05:53.861859 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 13:05:53.861875 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 13:05:53.861885 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 13:05:53.861896 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 16 13:05:53.861907 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:05:53.861918 kernel: Console: colour dummy device 80x25
Dec 16 13:05:53.861936 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:05:53.861955 kernel: ACPI: Core revision 20240827
Dec 16 13:05:53.861971 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 13:05:53.861988 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:05:53.862011 kernel: x2apic enabled
Dec 16 13:05:53.862029 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:05:53.862047 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 16 13:05:53.862065 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 16 13:05:53.862082 kernel: kvm-guest: setup PV IPIs
Dec 16 13:05:53.862100 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 13:05:53.862118 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 16 13:05:53.862136 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 16 13:05:53.862154 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 13:05:53.862177 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 16 13:05:53.862195 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 16 13:05:53.862222 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:05:53.862238 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:05:53.862256 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:05:53.862274 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 16 13:05:53.862292 kernel: active return thunk: retbleed_return_thunk
Dec 16 13:05:53.862310 kernel: RETBleed: Mitigation: untrained return thunk
Dec 16 13:05:53.862333 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 13:05:53.862351 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 13:05:53.862369 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 16 13:05:53.862387 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 16 13:05:53.862405 kernel: active return thunk: srso_return_thunk
Dec 16 13:05:53.862423 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 16 13:05:53.862441 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:05:53.862458 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:05:53.862477 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:05:53.862499 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:05:53.862517 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 16 13:05:53.862536 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:05:53.862553 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:05:53.862571 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:05:53.862589 kernel: landlock: Up and running.
Dec 16 13:05:53.862606 kernel: SELinux: Initializing.
Dec 16 13:05:53.862624 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 13:05:53.862642 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 13:05:53.862665 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 16 13:05:53.862683 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 16 13:05:53.862701 kernel: ... version: 0
Dec 16 13:05:53.862719 kernel: ... bit width: 48
Dec 16 13:05:53.862737 kernel: ... generic registers: 6
Dec 16 13:05:53.862755 kernel: ... value mask: 0000ffffffffffff
Dec 16 13:05:53.862798 kernel: ... max period: 00007fffffffffff
Dec 16 13:05:53.862818 kernel: ... fixed-purpose events: 0
Dec 16 13:05:53.862835 kernel: ... event mask: 000000000000003f
Dec 16 13:05:53.862858 kernel: signal: max sigframe size: 1776
Dec 16 13:05:53.862879 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:05:53.862899 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:05:53.862917 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:05:53.862935 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:05:53.862953 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:05:53.862970 kernel: .... node #0, CPUs: #1 #2 #3
Dec 16 13:05:53.862988 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 13:05:53.863006 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 16 13:05:53.863029 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 145388K reserved, 0K cma-reserved)
Dec 16 13:05:53.863047 kernel: devtmpfs: initialized
Dec 16 13:05:53.863065 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:05:53.863083 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 16 13:05:53.863101 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 16 13:05:53.863119 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Dec 16 13:05:53.863138 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 16 13:05:53.863156 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Dec 16 13:05:53.863174 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 16 13:05:53.863196 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:05:53.863222 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 13:05:53.863236 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:05:53.863246 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:05:53.863254 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:05:53.863262 kernel: audit: type=2000 audit(1765890351.891:1): state=initialized audit_enabled=0 res=1
Dec 16 13:05:53.863269 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:05:53.863277 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:05:53.863284 kernel: cpuidle: using governor menu
Dec 16 13:05:53.863296 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:05:53.863303 kernel: dca service started, version 1.12.1
Dec 16 13:05:53.863311 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Dec 16 13:05:53.863319 kernel: PCI: Using configuration type 1 for base access
Dec 16 13:05:53.863326 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:05:53.863334 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:05:53.863341 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:05:53.863349 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:05:53.863358 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:05:53.863366 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:05:53.863373 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:05:53.863381 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:05:53.863388 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 13:05:53.863396 kernel: ACPI: Interpreter enabled
Dec 16 13:05:53.863403 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 16 13:05:53.863411 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:05:53.863419 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:05:53.863426 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 13:05:53.863436 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 13:05:53.863443 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 13:05:53.863614 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 13:05:53.863733 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 13:05:53.863876 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 13:05:53.863887 kernel: PCI host bridge to bus 0000:00
Dec 16 13:05:53.864006 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 13:05:53.864115 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 13:05:53.864230 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 13:05:53.864351 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Dec 16 13:05:53.864456 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Dec 16 13:05:53.864561 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Dec 16 13:05:53.864665 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 13:05:53.864831 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 16 13:05:53.864967 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 13:05:53.865082 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Dec 16 13:05:53.865196 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Dec 16 13:05:53.865329 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 16 13:05:53.865443 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 13:05:53.865567 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 13:05:53.865687 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Dec 16 13:05:53.866837 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Dec 16 13:05:53.866971 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Dec 16 13:05:53.867096 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 16 13:05:53.867222 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Dec 16 13:05:53.867355 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Dec 16 13:05:53.867471 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Dec 16 13:05:53.867598 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 13:05:53.867714 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Dec 16 13:05:53.868067 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Dec 16 13:05:53.868188 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Dec 16 13:05:53.868326 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Dec 16 13:05:53.868452 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 16 13:05:53.868571 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 13:05:53.868694 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 16 13:05:53.868851 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Dec 16 13:05:53.868968 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Dec 16 13:05:53.869090 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 16 13:05:53.869205 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Dec 16 13:05:53.869225 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 13:05:53.869241 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 13:05:53.869251 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 13:05:53.869261 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 13:05:53.869271 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 13:05:53.869281 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 13:05:53.869291 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 13:05:53.869298 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 13:05:53.869306 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 13:05:53.869315 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 13:05:53.869324 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 13:05:53.869332 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 13:05:53.869340 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 13:05:53.869348 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 13:05:53.869355 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 13:05:53.869363 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 13:05:53.869371 kernel: iommu: Default domain type: Translated
Dec 16 13:05:53.869379 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:05:53.869386 kernel: efivars: Registered efivars operations
Dec 16 13:05:53.869396 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:05:53.869404 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 13:05:53.869412 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 16 13:05:53.869419 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Dec 16 13:05:53.869427 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Dec 16 13:05:53.869435 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Dec 16 13:05:53.869442 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Dec 16 13:05:53.869450 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Dec 16 13:05:53.869457 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Dec 16 13:05:53.869467 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Dec 16 13:05:53.869586 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 13:05:53.869700 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 13:05:53.869830 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 13:05:53.869840 kernel: vgaarb: loaded
Dec 16 13:05:53.869848 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 13:05:53.869856 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 13:05:53.869864 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 13:05:53.869878 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:05:53.869887 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:05:53.869896 kernel: pnp: PnP ACPI init
Dec 16 13:05:53.870035 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Dec 16 13:05:53.870049 kernel: pnp: PnP ACPI: found 6 devices
Dec 16 13:05:53.870057 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:05:53.870065 kernel: NET: Registered PF_INET protocol family
Dec 16 13:05:53.870073 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 13:05:53.870084 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 13:05:53.870092 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:05:53.870100 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 13:05:53.870108 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 13:05:53.870116 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 13:05:53.870124 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 13:05:53.870132 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 13:05:53.870140 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:05:53.870148 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:05:53.870285 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Dec 16 13:05:53.870407 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Dec 16 13:05:53.870516 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 13:05:53.870620 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 13:05:53.870724 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 13:05:53.870858 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Dec 16 13:05:53.870964 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Dec 16 13:05:53.871067 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Dec 16 13:05:53.871081 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:05:53.871090 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 16 13:05:53.871100 kernel: Initialise system trusted keyrings
Dec 16 13:05:53.871108 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 13:05:53.871116 kernel: Key type asymmetric registered
Dec 16 13:05:53.871126 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:05:53.871135 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:05:53.871143 kernel: io scheduler mq-deadline registered
Dec 16 13:05:53.871151 kernel: io scheduler kyber registered
Dec 16 13:05:53.871159 kernel: io scheduler bfq registered
Dec 16 13:05:53.871167 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:05:53.871176 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 16 13:05:53.871184 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 16 13:05:53.871192 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 16 13:05:53.871202 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:05:53.871210 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:05:53.871231 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 13:05:53.871242 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 13:05:53.871252 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 13:05:53.871263 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 13:05:53.871396 kernel: rtc_cmos 00:04: RTC can
wake from S4 Dec 16 13:05:53.871507 kernel: rtc_cmos 00:04: registered as rtc0 Dec 16 13:05:53.871621 kernel: rtc_cmos 00:04: setting system clock to 2025-12-16T13:05:53 UTC (1765890353) Dec 16 13:05:53.871729 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Dec 16 13:05:53.871740 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 16 13:05:53.871748 kernel: efifb: probing for efifb Dec 16 13:05:53.871756 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Dec 16 13:05:53.871764 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Dec 16 13:05:53.871813 kernel: efifb: scrolling: redraw Dec 16 13:05:53.871822 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 16 13:05:53.871833 kernel: Console: switching to colour frame buffer device 160x50 Dec 16 13:05:53.871843 kernel: fb0: EFI VGA frame buffer device Dec 16 13:05:53.871851 kernel: pstore: Using crash dump compression: deflate Dec 16 13:05:53.871859 kernel: pstore: Registered efi_pstore as persistent store backend Dec 16 13:05:53.871867 kernel: NET: Registered PF_INET6 protocol family Dec 16 13:05:53.871875 kernel: Segment Routing with IPv6 Dec 16 13:05:53.871883 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 13:05:53.871891 kernel: NET: Registered PF_PACKET protocol family Dec 16 13:05:53.871899 kernel: Key type dns_resolver registered Dec 16 13:05:53.871907 kernel: IPI shorthand broadcast: enabled Dec 16 13:05:53.871917 kernel: sched_clock: Marking stable (2788002227, 285957780)->(3128559834, -54599827) Dec 16 13:05:53.871925 kernel: registered taskstats version 1 Dec 16 13:05:53.871934 kernel: Loading compiled-in X.509 certificates Dec 16 13:05:53.871942 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d' Dec 16 13:05:53.871950 kernel: Demotion targets for Node 0: null Dec 16 13:05:53.871958 kernel: Key type .fscrypt registered Dec 16 13:05:53.871966 kernel: Key type 
fscrypt-provisioning registered Dec 16 13:05:53.871974 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 13:05:53.871982 kernel: ima: Allocated hash algorithm: sha1 Dec 16 13:05:53.871992 kernel: ima: No architecture policies found Dec 16 13:05:53.872000 kernel: clk: Disabling unused clocks Dec 16 13:05:53.872008 kernel: Warning: unable to open an initial console. Dec 16 13:05:53.872017 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 16 13:05:53.872025 kernel: Write protecting the kernel read-only data: 40960k Dec 16 13:05:53.872033 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 16 13:05:53.872041 kernel: Run /init as init process Dec 16 13:05:53.872049 kernel: with arguments: Dec 16 13:05:53.872057 kernel: /init Dec 16 13:05:53.872067 kernel: with environment: Dec 16 13:05:53.872075 kernel: HOME=/ Dec 16 13:05:53.872083 kernel: TERM=linux Dec 16 13:05:53.872092 systemd[1]: Successfully made /usr/ read-only. Dec 16 13:05:53.872103 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:05:53.872113 systemd[1]: Detected virtualization kvm. Dec 16 13:05:53.872121 systemd[1]: Detected architecture x86-64. Dec 16 13:05:53.872129 systemd[1]: Running in initrd. Dec 16 13:05:53.872139 systemd[1]: No hostname configured, using default hostname. Dec 16 13:05:53.872148 systemd[1]: Hostname set to . Dec 16 13:05:53.872157 systemd[1]: Initializing machine ID from VM UUID. Dec 16 13:05:53.872165 systemd[1]: Queued start job for default target initrd.target. Dec 16 13:05:53.872174 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 16 13:05:53.872182 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:05:53.872191 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 13:05:53.872200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:05:53.872210 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 13:05:53.872231 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 13:05:53.872243 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 13:05:53.872255 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 13:05:53.872266 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:05:53.872278 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:05:53.872286 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:05:53.872297 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:05:53.872305 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:05:53.872314 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:05:53.872322 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:05:53.872330 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:05:53.872339 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:05:53.872347 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:05:53.872356 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 16 13:05:53.872366 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:05:53.872374 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:05:53.872383 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:05:53.872391 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 13:05:53.872399 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:05:53.872408 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:05:53.872417 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:05:53.872425 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:05:53.872435 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:05:53.872445 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:05:53.872454 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:05:53.872462 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 13:05:53.872471 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:05:53.872502 systemd-journald[201]: Collecting audit messages is disabled. Dec 16 13:05:53.872524 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 13:05:53.872534 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:05:53.872544 systemd-journald[201]: Journal started Dec 16 13:05:53.872564 systemd-journald[201]: Runtime Journal (/run/log/journal/b38e3154f2214537b62c01691a8dfc63) is 6M, max 48.1M, 42.1M free. Dec 16 13:05:53.876959 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 16 13:05:53.859657 systemd-modules-load[204]: Inserted module 'overlay' Dec 16 13:05:53.891344 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:05:53.883539 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 13:05:53.885866 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:05:53.898292 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 13:05:53.886678 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:05:53.888865 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:05:53.903589 kernel: Bridge firewalling registered Dec 16 13:05:53.901588 systemd-modules-load[204]: Inserted module 'br_netfilter' Dec 16 13:05:53.907342 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:05:53.910459 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:05:53.917630 systemd-tmpfiles[216]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 13:05:53.918643 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:05:53.922874 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:05:53.931270 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:05:53.933010 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 13:05:53.945928 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:05:53.949201 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 16 13:05:53.961375 dracut-cmdline[241]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:05:54.001969 systemd-resolved[247]: Positive Trust Anchors: Dec 16 13:05:54.001983 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:05:54.002012 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:05:54.004392 systemd-resolved[247]: Defaulting to hostname 'linux'. Dec 16 13:05:54.005389 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:05:54.006154 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:05:54.064811 kernel: SCSI subsystem initialized Dec 16 13:05:54.074798 kernel: Loading iSCSI transport class v2.0-870. Dec 16 13:05:54.084796 kernel: iscsi: registered transport (tcp) Dec 16 13:05:54.106003 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:05:54.106036 kernel: QLogic iSCSI HBA Driver Dec 16 13:05:54.124621 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Dec 16 13:05:54.143012 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:05:54.143980 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:05:54.198992 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:05:54.200954 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 13:05:54.261799 kernel: raid6: avx2x4 gen() 30215 MB/s Dec 16 13:05:54.278794 kernel: raid6: avx2x2 gen() 30943 MB/s Dec 16 13:05:54.296537 kernel: raid6: avx2x1 gen() 25467 MB/s Dec 16 13:05:54.296552 kernel: raid6: using algorithm avx2x2 gen() 30943 MB/s Dec 16 13:05:54.314556 kernel: raid6: .... xor() 19643 MB/s, rmw enabled Dec 16 13:05:54.314580 kernel: raid6: using avx2x2 recovery algorithm Dec 16 13:05:54.334796 kernel: xor: automatically using best checksumming function avx Dec 16 13:05:54.494797 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:05:54.503157 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:05:54.505669 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:05:54.536440 systemd-udevd[454]: Using default interface naming scheme 'v255'. Dec 16 13:05:54.541689 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:05:54.543075 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:05:54.566888 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation Dec 16 13:05:54.598374 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:05:54.603093 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:05:54.685671 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:05:54.691949 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Dec 16 13:05:54.724799 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 16 13:05:54.729795 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 16 13:05:54.735992 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 13:05:54.736015 kernel: GPT:9289727 != 19775487 Dec 16 13:05:54.736032 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 13:05:54.736043 kernel: GPT:9289727 != 19775487 Dec 16 13:05:54.736052 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 13:05:54.736061 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:05:54.745794 kernel: libata version 3.00 loaded. Dec 16 13:05:54.749797 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 13:05:54.757312 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:05:54.757366 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 13:05:54.757569 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 13:05:54.751642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:05:54.763556 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 13:05:54.770873 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 13:05:54.771017 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 13:05:54.751790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:05:54.754078 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:05:54.764668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:05:54.791842 kernel: scsi host0: ahci Dec 16 13:05:54.793793 kernel: AES CTR mode by8 optimization enabled Dec 16 13:05:54.795790 kernel: scsi host1: ahci Dec 16 13:05:54.799631 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 16 13:05:54.803651 kernel: scsi host2: ahci Dec 16 13:05:54.807370 kernel: scsi host3: ahci Dec 16 13:05:54.807565 kernel: scsi host4: ahci Dec 16 13:05:54.807703 kernel: scsi host5: ahci Dec 16 13:05:54.809858 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 lpm-pol 1 Dec 16 13:05:54.809880 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 lpm-pol 1 Dec 16 13:05:54.811714 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 lpm-pol 1 Dec 16 13:05:54.814287 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 lpm-pol 1 Dec 16 13:05:54.814308 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 lpm-pol 1 Dec 16 13:05:54.815228 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 13:05:54.818637 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 lpm-pol 1 Dec 16 13:05:54.836749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 13:05:54.846537 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 16 13:05:54.847176 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 13:05:54.860951 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 13:05:54.865168 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:05:54.866981 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:05:54.867030 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:05:54.872405 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 16 13:05:54.882310 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:05:54.883385 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:05:54.890900 disk-uuid[617]: Primary Header is updated. Dec 16 13:05:54.890900 disk-uuid[617]: Secondary Entries is updated. Dec 16 13:05:54.890900 disk-uuid[617]: Secondary Header is updated. Dec 16 13:05:54.895127 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:05:54.909940 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:05:55.121803 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 13:05:55.123809 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 13:05:55.123862 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 13:05:55.130800 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 13:05:55.130826 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 13:05:55.134161 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 16 13:05:55.134176 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 13:05:55.134198 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 16 13:05:55.135198 kernel: ata3.00: applying bridge limits Dec 16 13:05:55.137033 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 13:05:55.137050 kernel: ata3.00: configured for UDMA/100 Dec 16 13:05:55.138804 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 16 13:05:55.194217 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 16 13:05:55.194422 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 13:05:55.215804 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 16 13:05:55.624429 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 13:05:55.627058 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 16 13:05:55.630669 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:05:55.632849 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:05:55.637580 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:05:55.672060 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:05:55.904440 disk-uuid[621]: The operation has completed successfully. Dec 16 13:05:55.907308 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:05:55.936193 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:05:55.936339 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:05:55.965680 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 13:05:55.984824 sh[652]: Success Dec 16 13:05:56.003108 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 13:05:56.003148 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:05:56.004752 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:05:56.013816 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 16 13:05:56.041420 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:05:56.044956 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 13:05:56.062195 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 16 13:05:56.070795 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (664) Dec 16 13:05:56.073911 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 13:05:56.073937 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:05:56.079320 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:05:56.079337 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:05:56.080484 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 13:05:56.081555 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:05:56.083828 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:05:56.084681 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 13:05:56.088865 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:05:56.116800 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (698) Dec 16 13:05:56.120337 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:05:56.120359 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:05:56.124141 kernel: BTRFS info (device vda6): turning on async discard Dec 16 13:05:56.124214 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 13:05:56.129806 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:05:56.130410 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 13:05:56.132709 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 16 13:05:56.221430 ignition[744]: Ignition 2.22.0 Dec 16 13:05:56.222477 ignition[744]: Stage: fetch-offline Dec 16 13:05:56.222519 ignition[744]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:05:56.222528 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 13:05:56.222607 ignition[744]: parsed url from cmdline: "" Dec 16 13:05:56.222610 ignition[744]: no config URL provided Dec 16 13:05:56.222615 ignition[744]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:05:56.222623 ignition[744]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:05:56.222643 ignition[744]: op(1): [started] loading QEMU firmware config module Dec 16 13:05:56.222648 ignition[744]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 16 13:05:56.231631 ignition[744]: op(1): [finished] loading QEMU firmware config module Dec 16 13:05:56.239524 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:05:56.246057 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:05:56.289309 systemd-networkd[842]: lo: Link UP Dec 16 13:05:56.289321 systemd-networkd[842]: lo: Gained carrier Dec 16 13:05:56.290816 systemd-networkd[842]: Enumeration completed Dec 16 13:05:56.291169 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:05:56.291173 systemd-networkd[842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:05:56.291407 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:05:56.291624 systemd-networkd[842]: eth0: Link UP Dec 16 13:05:56.292155 systemd-networkd[842]: eth0: Gained carrier Dec 16 13:05:56.292163 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 16 13:05:56.295502 systemd[1]: Reached target network.target - Network. Dec 16 13:05:56.321815 systemd-networkd[842]: eth0: DHCPv4 address 10.0.0.87/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 13:05:56.336092 ignition[744]: parsing config with SHA512: c42a20651e0ea28eeb8ce76f24aba639d4ecce5f99880fe76111c96a7b20721cf93a72a71ddbe7dc5b4474f13cf56055f5daa007437ab7d8ecf3dce2c760826d Dec 16 13:05:56.341041 unknown[744]: fetched base config from "system" Dec 16 13:05:56.341051 unknown[744]: fetched user config from "qemu" Dec 16 13:05:56.341409 ignition[744]: fetch-offline: fetch-offline passed Dec 16 13:05:56.344504 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:05:56.341457 ignition[744]: Ignition finished successfully Dec 16 13:05:56.347795 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 16 13:05:56.350069 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 13:05:56.383469 ignition[847]: Ignition 2.22.0 Dec 16 13:05:56.383481 ignition[847]: Stage: kargs Dec 16 13:05:56.383611 ignition[847]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:05:56.383621 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 13:05:56.384408 ignition[847]: kargs: kargs passed Dec 16 13:05:56.384451 ignition[847]: Ignition finished successfully Dec 16 13:05:56.391469 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 13:05:56.393826 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 16 13:05:56.425036 ignition[854]: Ignition 2.22.0 Dec 16 13:05:56.425050 ignition[854]: Stage: disks Dec 16 13:05:56.425187 ignition[854]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:05:56.425198 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 13:05:56.425932 ignition[854]: disks: disks passed Dec 16 13:05:56.425978 ignition[854]: Ignition finished successfully Dec 16 13:05:56.431910 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 13:05:56.435866 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:05:56.439395 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:05:56.440334 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:05:56.444293 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:05:56.447456 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:05:56.452140 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 13:05:56.488916 systemd-resolved[247]: Detected conflict on linux IN A 10.0.0.87 Dec 16 13:05:56.488929 systemd-resolved[247]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Dec 16 13:05:56.490303 systemd-fsck[864]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 16 13:05:56.498049 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:05:56.502628 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 13:05:56.617818 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 16 13:05:56.618048 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:05:56.619350 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:05:56.622355 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 16 13:05:56.625493 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:05:56.627620 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 13:05:56.627673 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:05:56.627701 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:05:56.646484 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 13:05:56.654182 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (872) Dec 16 13:05:56.654206 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:05:56.654217 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:05:56.654891 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 13:05:56.659706 kernel: BTRFS info (device vda6): turning on async discard Dec 16 13:05:56.659726 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 13:05:56.663207 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:05:56.711988 initrd-setup-root[896]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 13:05:56.716454 initrd-setup-root[903]: cut: /sysroot/etc/group: No such file or directory Dec 16 13:05:56.721649 initrd-setup-root[910]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 13:05:56.726809 initrd-setup-root[917]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 13:05:56.816923 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 13:05:56.820694 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:05:56.822070 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Dec 16 13:05:56.847842 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:05:56.858190 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 13:05:56.875498 ignition[986]: INFO : Ignition 2.22.0
Dec 16 13:05:56.875498 ignition[986]: INFO : Stage: mount
Dec 16 13:05:56.878096 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:05:56.878096 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:05:56.878096 ignition[986]: INFO : mount: mount passed
Dec 16 13:05:56.878096 ignition[986]: INFO : Ignition finished successfully
Dec 16 13:05:56.886661 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:05:56.890263 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:05:57.070191 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:05:57.071830 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:05:57.101823 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (998)
Dec 16 13:05:57.101860 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:05:57.101873 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:05:57.106787 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 13:05:57.106820 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 13:05:57.108329 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:05:57.142526 ignition[1015]: INFO : Ignition 2.22.0
Dec 16 13:05:57.142526 ignition[1015]: INFO : Stage: files
Dec 16 13:05:57.145086 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:05:57.145086 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:05:57.149453 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:05:57.151918 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:05:57.151918 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:05:57.158365 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:05:57.160658 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:05:57.163153 unknown[1015]: wrote ssh authorized keys file for user: core
Dec 16 13:05:57.164946 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:05:57.168375 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:05:57.171657 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 13:05:57.206323 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:05:57.253444 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:05:57.253444 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:05:57.259549 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 16 13:05:57.685058 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 16 13:05:57.787105 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:05:57.793238 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:05:57.793238 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:05:57.793238 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:05:57.793238 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:05:57.793238 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:05:57.793238 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:05:57.793238 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:05:57.793238 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:05:57.793238 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:05:57.793238 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:05:57.793238 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:05:57.822380 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:05:57.822380 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:05:57.822380 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Dec 16 13:05:58.173348 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 16 13:05:58.226980 systemd-networkd[842]: eth0: Gained IPv6LL
Dec 16 13:05:58.526136 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:05:58.526136 ignition[1015]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 16 13:05:58.532259 ignition[1015]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:05:58.535390 ignition[1015]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:05:58.535390 ignition[1015]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 16 13:05:58.535390 ignition[1015]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 16 13:05:58.542967 ignition[1015]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 13:05:58.542967 ignition[1015]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 13:05:58.542967 ignition[1015]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 16 13:05:58.542967 ignition[1015]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 16 13:05:58.564232 ignition[1015]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 13:05:58.568994 ignition[1015]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 13:05:58.571568 ignition[1015]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 16 13:05:58.571568 ignition[1015]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:05:58.571568 ignition[1015]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:05:58.571568 ignition[1015]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:05:58.571568 ignition[1015]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:05:58.571568 ignition[1015]: INFO : files: files passed
Dec 16 13:05:58.571568 ignition[1015]: INFO : Ignition finished successfully
Dec 16 13:05:58.579883 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:05:58.590299 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:05:58.594242 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:05:58.607791 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:05:58.607931 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:05:58.614105 initrd-setup-root-after-ignition[1044]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 16 13:05:58.618695 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:05:58.621378 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:05:58.621188 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:05:58.626349 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:05:58.622831 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:05:58.627487 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:05:58.689266 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:05:58.689406 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:05:58.691705 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:05:58.692366 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:05:58.698337 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:05:58.699156 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:05:58.741767 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:05:58.743717 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:05:58.771549 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:05:58.772303 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:05:58.775795 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:05:58.779319 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:05:58.779441 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:05:58.784606 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:05:58.785492 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:05:58.792068 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:05:58.792799 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:05:58.796350 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:05:58.799670 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:05:58.803338 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:05:58.806407 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:05:58.807248 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:05:58.807802 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:05:58.816269 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:05:58.819290 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:05:58.819452 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:05:58.824019 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:05:58.824897 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:05:58.829309 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:05:58.832347 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:05:58.835682 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:05:58.835809 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:05:58.840741 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:05:58.840889 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:05:58.841788 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:05:58.846250 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:05:58.851883 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:05:58.852731 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:05:58.856860 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:05:58.859584 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:05:58.859692 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:05:58.862399 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:05:58.862485 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:05:58.865471 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:05:58.865609 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:05:58.868436 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:05:58.868539 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:05:58.872388 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:05:58.874314 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:05:58.874444 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:05:58.888794 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:05:58.889455 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:05:58.889588 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:05:58.892276 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:05:58.892393 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:05:58.905522 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:05:58.908515 ignition[1070]: INFO : Ignition 2.22.0
Dec 16 13:05:58.908515 ignition[1070]: INFO : Stage: umount
Dec 16 13:05:58.910954 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:05:58.910954 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:05:58.914448 ignition[1070]: INFO : umount: umount passed
Dec 16 13:05:58.914448 ignition[1070]: INFO : Ignition finished successfully
Dec 16 13:05:58.917388 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:05:58.918747 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:05:58.918892 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:05:58.924020 systemd[1]: Stopped target network.target - Network.
Dec 16 13:05:58.924626 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:05:58.924677 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:05:58.925201 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:05:58.925242 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:05:58.930331 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:05:58.930381 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:05:58.935308 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:05:58.935353 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:05:58.938485 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:05:58.941501 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:05:58.953276 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:05:58.954852 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:05:58.961218 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:05:58.961362 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:05:58.962455 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:05:58.962572 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:05:58.971897 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:05:58.972222 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:05:58.972345 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:05:58.977148 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:05:58.977246 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:05:59.135750 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:05:59.137499 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:05:59.142104 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:05:59.144238 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:05:59.147669 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:05:59.147714 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:05:59.153465 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:05:59.155125 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:05:59.155219 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:05:59.156289 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:05:59.156335 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:05:59.163735 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:05:59.163801 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:05:59.164622 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:05:59.170110 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:05:59.178291 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:05:59.178415 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:05:59.196472 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:05:59.196651 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:05:59.197640 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:05:59.197681 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:05:59.202340 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:05:59.202375 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:05:59.205474 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:05:59.205522 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:05:59.211197 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:05:59.211245 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:05:59.215598 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:05:59.215650 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:05:59.221906 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:05:59.222627 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:05:59.222682 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:05:59.230506 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:05:59.230551 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:05:59.236134 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:05:59.236192 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:05:59.242497 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 13:05:59.242554 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 13:05:59.242599 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:05:59.258759 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:05:59.258889 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:05:59.259882 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:05:59.260981 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:05:59.273669 systemd[1]: Switching root.
Dec 16 13:05:59.310000 systemd-journald[201]: Journal stopped
Dec 16 13:06:00.558756 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:06:00.558847 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:06:00.558866 kernel: SELinux: policy capability open_perms=1
Dec 16 13:06:00.558878 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:06:00.558894 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:06:00.558905 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:06:00.558916 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:06:00.558929 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:06:00.558946 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:06:00.558957 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:06:00.558968 kernel: audit: type=1403 audit(1765890359.729:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:06:00.558986 systemd[1]: Successfully loaded SELinux policy in 60.641ms.
Dec 16 13:06:00.559000 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.146ms.
Dec 16 13:06:00.559013 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:06:00.559025 systemd[1]: Detected virtualization kvm.
Dec 16 13:06:00.559045 systemd[1]: Detected architecture x86-64.
Dec 16 13:06:00.559059 systemd[1]: Detected first boot.
Dec 16 13:06:00.559071 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:06:00.559083 zram_generator::config[1115]: No configuration found.
Dec 16 13:06:00.559096 kernel: Guest personality initialized and is inactive
Dec 16 13:06:00.559111 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 13:06:00.559122 kernel: Initialized host personality
Dec 16 13:06:00.559134 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:06:00.559145 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:06:00.559161 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:06:00.559173 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:06:00.559185 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:06:00.559197 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:06:00.559209 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:06:00.559223 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:06:00.559235 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:06:00.559247 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:06:00.559258 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:06:00.559273 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:06:00.559285 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:06:00.559297 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:06:00.559309 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:06:00.559321 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:06:00.559333 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:06:00.559345 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:06:00.559357 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:06:00.559372 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:06:00.559383 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:06:00.559396 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:06:00.559408 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:06:00.559420 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:06:00.559433 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:06:00.559445 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:06:00.559456 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:06:00.559471 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:06:00.559483 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:06:00.559498 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:06:00.559511 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:06:00.559524 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:06:00.559536 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:06:00.559548 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:06:00.559560 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:06:00.559572 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:06:00.559584 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:06:00.559597 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:06:00.559609 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:06:00.559621 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:06:00.559633 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:06:00.559646 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:06:00.559658 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:06:00.559669 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:06:00.559681 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:06:00.559696 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:06:00.559708 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:06:00.559720 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:06:00.559732 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:06:00.559745 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:06:00.559757 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:06:00.559769 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:06:00.559799 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:06:00.559811 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:06:00.559826 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:06:00.559839 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:06:00.559851 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:06:00.559863 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:06:00.559875 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:06:00.559887 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:06:00.559899 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:06:00.559911 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:06:00.559925 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:06:00.559937 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:06:00.559949 kernel: ACPI: bus type drm_connector registered
Dec 16 13:06:00.559960 kernel: fuse: init (API version 7.41)
Dec 16 13:06:00.559970 kernel: loop: module loaded
Dec 16 13:06:00.559982 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:06:00.559994 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:06:00.560006 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:06:00.560019 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:06:00.560044 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:06:00.560056 systemd[1]: Stopped verity-setup.service.
Dec 16 13:06:00.560070 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:06:00.560082 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:06:00.560112 systemd-journald[1197]: Collecting audit messages is disabled.
Dec 16 13:06:00.560140 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:06:00.560152 systemd-journald[1197]: Journal started
Dec 16 13:06:00.560178 systemd-journald[1197]: Runtime Journal (/run/log/journal/b38e3154f2214537b62c01691a8dfc63) is 6M, max 48.1M, 42.1M free.
Dec 16 13:06:00.249824 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:06:00.269574 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 16 13:06:00.270055 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:06:00.562890 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:06:00.564941 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:06:00.566654 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:06:00.568496 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:06:00.570372 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:06:00.572213 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:06:00.574349 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:06:00.576605 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:06:00.576861 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:06:00.579029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:06:00.579255 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:06:00.581328 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:06:00.581538 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:06:00.583484 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:06:00.583696 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:06:00.585916 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:06:00.586140 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:06:00.588147 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:06:00.588355 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:06:00.590386 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:06:00.592502 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:06:00.594838 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:06:00.597146 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:06:00.612051 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:06:00.615136 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:06:00.617960 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:06:00.619742 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:06:00.619850 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:06:00.622383 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:06:00.632752 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:06:00.634616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:06:00.635840 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:06:00.638640 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:06:00.641173 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:06:00.643468 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:06:00.645605 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:06:00.647895 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:06:00.650966 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:06:00.652471 systemd-journald[1197]: Time spent on flushing to /var/log/journal/b38e3154f2214537b62c01691a8dfc63 is 30.709ms for 1076 entries.
Dec 16 13:06:00.652471 systemd-journald[1197]: System Journal (/var/log/journal/b38e3154f2214537b62c01691a8dfc63) is 8M, max 195.6M, 187.6M free.
Dec 16 13:06:00.694825 systemd-journald[1197]: Received client request to flush runtime journal.
Dec 16 13:06:00.694859 kernel: loop0: detected capacity change from 0 to 128560
Dec 16 13:06:00.655464 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:06:00.662045 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:06:00.664389 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:06:00.667315 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:06:00.671743 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:06:00.679359 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:06:00.701048 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:06:00.685986 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:06:00.699740 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 13:06:00.704350 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:06:00.711085 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 13:06:00.714391 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:06:00.730160 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:06:00.740936 kernel: loop1: detected capacity change from 0 to 110984
Dec 16 13:06:00.739647 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Dec 16 13:06:00.739672 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Dec 16 13:06:00.745671 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:06:00.765798 kernel: loop2: detected capacity change from 0 to 229808
Dec 16 13:06:00.798816 kernel: loop3: detected capacity change from 0 to 128560
Dec 16 13:06:00.811809 kernel: loop4: detected capacity change from 0 to 110984
Dec 16 13:06:00.823803 kernel: loop5: detected capacity change from 0 to 229808
Dec 16 13:06:00.834395 (sd-merge)[1256]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 16 13:06:00.834972 (sd-merge)[1256]: Merged extensions into '/usr'.
Dec 16 13:06:00.840178 systemd[1]: Reload requested from client PID 1234 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:06:00.840200 systemd[1]: Reloading...
Dec 16 13:06:00.905853 zram_generator::config[1288]: No configuration found.
Dec 16 13:06:00.974952 ldconfig[1229]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:06:01.095834 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:06:01.096352 systemd[1]: Reloading finished in 255 ms.
Dec 16 13:06:01.139421 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:06:01.141606 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 13:06:01.143855 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:06:01.162413 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:06:01.164737 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:06:01.167847 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:06:01.177765 systemd[1]: Reload requested from client PID 1322 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:06:01.177795 systemd[1]: Reloading...
Dec 16 13:06:01.184439 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:06:01.184838 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:06:01.185141 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:06:01.185399 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 13:06:01.186283 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 13:06:01.186547 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Dec 16 13:06:01.186613 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Dec 16 13:06:01.190693 systemd-tmpfiles[1323]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:06:01.190817 systemd-tmpfiles[1323]: Skipping /boot
Dec 16 13:06:01.199078 systemd-udevd[1324]: Using default interface naming scheme 'v255'.
Dec 16 13:06:01.200620 systemd-tmpfiles[1323]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:06:01.200634 systemd-tmpfiles[1323]: Skipping /boot
Dec 16 13:06:01.231788 zram_generator::config[1353]: No configuration found.
Dec 16 13:06:01.356818 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:06:01.370798 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 16 13:06:01.377807 kernel: ACPI: button: Power Button [PWRF]
Dec 16 13:06:01.412522 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Dec 16 13:06:01.412822 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 16 13:06:01.413024 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 16 13:06:01.493638 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 13:06:01.493885 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 13:06:01.496524 systemd[1]: Reloading finished in 318 ms.
Dec 16 13:06:01.509052 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:06:01.528337 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:06:01.557268 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:06:01.592138 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:06:01.593525 kernel: kvm_amd: TSC scaling supported
Dec 16 13:06:01.593621 kernel: kvm_amd: Nested Virtualization enabled
Dec 16 13:06:01.593641 kernel: kvm_amd: Nested Paging enabled
Dec 16 13:06:01.593654 kernel: kvm_amd: LBR virtualization supported
Dec 16 13:06:01.593669 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 16 13:06:01.593682 kernel: kvm_amd: Virtual GIF supported
Dec 16 13:06:01.593493 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:06:01.600732 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:06:01.602896 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:06:01.615203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:06:01.619117 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:06:01.622202 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:06:01.627513 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:06:01.629402 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:06:01.630747 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:06:01.634869 kernel: EDAC MC: Ver: 3.0.0
Dec 16 13:06:01.633892 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:06:01.637418 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:06:01.643944 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:06:01.648631 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:06:01.654203 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 13:06:01.657080 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:06:01.661043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:06:01.662816 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:06:01.664339 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:06:01.664557 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:06:01.667344 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:06:01.671373 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:06:01.673439 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:06:01.673881 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:06:01.676516 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:06:01.676839 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:06:01.679087 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:06:01.681561 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:06:01.696651 augenrules[1485]: No rules
Dec 16 13:06:01.698195 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:06:01.698537 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:06:01.701932 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:06:01.701995 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:06:01.703261 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:06:01.706947 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 13:06:01.715861 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:06:01.717160 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:06:01.718737 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:06:01.727680 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:06:01.731174 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:06:01.755687 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 13:06:01.827414 systemd-networkd[1455]: lo: Link UP
Dec 16 13:06:01.827426 systemd-networkd[1455]: lo: Gained carrier
Dec 16 13:06:01.829042 systemd-networkd[1455]: Enumeration completed
Dec 16 13:06:01.829125 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:06:01.830420 systemd-networkd[1455]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:06:01.830432 systemd-networkd[1455]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:06:01.830931 systemd-networkd[1455]: eth0: Link UP
Dec 16 13:06:01.831124 systemd-networkd[1455]: eth0: Gained carrier
Dec 16 13:06:01.831146 systemd-networkd[1455]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:06:01.832407 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:06:01.835256 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:06:01.837185 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 13:06:01.839211 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:06:01.839425 systemd-resolved[1459]: Positive Trust Anchors:
Dec 16 13:06:01.839664 systemd-resolved[1459]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:06:01.839737 systemd-resolved[1459]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:06:01.843614 systemd-resolved[1459]: Defaulting to hostname 'linux'.
Dec 16 13:06:01.845210 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:06:01.847038 systemd[1]: Reached target network.target - Network.
Dec 16 13:06:01.848439 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:06:01.850283 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:06:01.850930 systemd-networkd[1455]: eth0: DHCPv4 address 10.0.0.87/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 13:06:01.851600 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Dec 16 13:06:01.852031 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:06:03.217219 systemd-resolved[1459]: Clock change detected. Flushing caches.
Dec 16 13:06:03.217233 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:06:03.217262 systemd-timesyncd[1460]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 16 13:06:03.217300 systemd-timesyncd[1460]: Initial clock synchronization to Tue 2025-12-16 13:06:03.217182 UTC.
Dec 16 13:06:03.219247 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 13:06:03.221248 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 13:06:03.223135 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 13:06:03.225158 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 13:06:03.227198 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 13:06:03.227229 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:06:03.228852 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:06:03.231128 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 13:06:03.234468 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 13:06:03.237845 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 13:06:03.239992 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 13:06:03.241980 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 13:06:03.249261 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 13:06:03.251355 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 13:06:03.254246 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 13:06:03.256344 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 13:06:03.259995 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:06:03.261519 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:06:03.263014 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:06:03.263040 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:06:03.264062 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 13:06:03.266768 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 13:06:03.280009 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 13:06:03.283115 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 13:06:03.285738 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 13:06:03.287375 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 13:06:03.288659 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 13:06:03.291656 jq[1517]: false
Dec 16 13:06:03.291774 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 13:06:03.294986 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 13:06:03.297768 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 13:06:03.302357 extend-filesystems[1518]: Found /dev/vda6
Dec 16 13:06:03.302185 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 13:06:03.305688 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 13:06:03.308062 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 13:06:03.308496 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 13:06:03.310957 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing passwd entry cache
Dec 16 13:06:03.311195 oslogin_cache_refresh[1519]: Refreshing passwd entry cache
Dec 16 13:06:03.311661 extend-filesystems[1518]: Found /dev/vda9
Dec 16 13:06:03.312964 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 13:06:03.316649 extend-filesystems[1518]: Checking size of /dev/vda9
Dec 16 13:06:03.319943 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting users, quitting
Dec 16 13:06:03.319943 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:06:03.319943 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing group entry cache
Dec 16 13:06:03.319684 oslogin_cache_refresh[1519]: Failure getting users, quitting
Dec 16 13:06:03.319705 oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:06:03.319763 oslogin_cache_refresh[1519]: Refreshing group entry cache
Dec 16 13:06:03.323775 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 13:06:03.329444 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting groups, quitting
Dec 16 13:06:03.329444 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:06:03.328838 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 13:06:03.327984 oslogin_cache_refresh[1519]: Failure getting groups, quitting
Dec 16 13:06:03.327993 oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:06:03.331253 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 13:06:03.331492 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 13:06:03.331830 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 13:06:03.332105 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 13:06:03.334204 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 16 13:06:03.336718 jq[1538]: true
Dec 16 13:06:03.334447 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 16 13:06:03.337937 extend-filesystems[1518]: Resized partition /dev/vda9
Dec 16 13:06:03.343272 extend-filesystems[1544]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 13:06:03.350804 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 16 13:06:03.339696 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 13:06:03.350939 update_engine[1534]: I20251216 13:06:03.346230 1534 main.cc:92] Flatcar Update Engine starting
Dec 16 13:06:03.340122 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 13:06:03.354143 jq[1546]: true
Dec 16 13:06:03.360422 tar[1543]: linux-amd64/LICENSE
Dec 16 13:06:03.360793 tar[1543]: linux-amd64/helm
Dec 16 13:06:03.362227 (ntainerd)[1547]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 13:06:03.395151 dbus-daemon[1515]: [system] SELinux support is enabled
Dec 16 13:06:03.400577 update_engine[1534]: I20251216 13:06:03.398566 1534 update_check_scheduler.cc:74] Next update check in 3m20s
Dec 16 13:06:03.398981 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 13:06:03.402884 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 16 13:06:03.408285 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 13:06:03.408305 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 13:06:03.411927 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 13:06:03.411943 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 13:06:03.419937 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 13:06:03.425329 systemd-logind[1530]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 16 13:06:03.425351 systemd-logind[1530]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 13:06:03.425862 extend-filesystems[1544]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 16 13:06:03.425862 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 16 13:06:03.425862 extend-filesystems[1544]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 16 13:06:03.425730 systemd-logind[1530]: New seat seat0.
Dec 16 13:06:03.433441 extend-filesystems[1518]: Resized filesystem in /dev/vda9
Dec 16 13:06:03.435077 bash[1575]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:06:03.436195 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 13:06:03.438117 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 13:06:03.440198 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 13:06:03.440468 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 13:06:03.442913 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 13:06:03.447932 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 16 13:06:03.497698 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 13:06:03.542671 sshd_keygen[1548]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 13:06:03.566836 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 13:06:03.571576 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 13:06:03.572202 containerd[1547]: time="2025-12-16T13:06:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 13:06:03.572966 containerd[1547]: time="2025-12-16T13:06:03.572917448Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 16 13:06:03.584292 containerd[1547]: time="2025-12-16T13:06:03.584249805Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.821µs"
Dec 16 13:06:03.584292 containerd[1547]: time="2025-12-16T13:06:03.584283588Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 13:06:03.584356 containerd[1547]: time="2025-12-16T13:06:03.584302494Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 13:06:03.584510 containerd[1547]: time="2025-12-16T13:06:03.584478173Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 13:06:03.584510 containerd[1547]: time="2025-12-16T13:06:03.584497860Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 13:06:03.584550 containerd[1547]: time="2025-12-16T13:06:03.584520863Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:06:03.584604 containerd[1547]: time="2025-12-16T13:06:03.584580876Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:06:03.584604 containerd[1547]: time="2025-12-16T13:06:03.584595233Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:06:03.584910 containerd[1547]: time="2025-12-16T13:06:03.584858266Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:06:03.584910 containerd[1547]: time="2025-12-16T13:06:03.584900355Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:06:03.584910 containerd[1547]: time="2025-12-16T13:06:03.584911125Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:06:03.584987 containerd[1547]: time="2025-12-16T13:06:03.584919892Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 13:06:03.585025 containerd[1547]: time="2025-12-16T13:06:03.585008317Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 13:06:03.585270 containerd[1547]: time="2025-12-16T13:06:03.585232819Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:06:03.585270 containerd[1547]: time="2025-12-16T13:06:03.585266923Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:06:03.585325 containerd[1547]: time="2025-12-16T13:06:03.585276881Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 13:06:03.585362 containerd[1547]: time="2025-12-16T13:06:03.585345079Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 13:06:03.585664 containerd[1547]: time="2025-12-16T13:06:03.585641415Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 13:06:03.585727 containerd[1547]: time="2025-12-16T13:06:03.585707258Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 13:06:03.595133 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 13:06:03.595428 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 13:06:03.598048 containerd[1547]: time="2025-12-16T13:06:03.597983505Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 13:06:03.598150 containerd[1547]: time="2025-12-16T13:06:03.598064187Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 13:06:03.598150 containerd[1547]: time="2025-12-16T13:06:03.598096387Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 13:06:03.598150 containerd[1547]: time="2025-12-16T13:06:03.598109732Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 13:06:03.598150 containerd[1547]: time="2025-12-16T13:06:03.598122556Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 13:06:03.598150 containerd[1547]: time="2025-12-16T13:06:03.598132324Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 13:06:03.598150 containerd[1547]: time="2025-12-16T13:06:03.598145659Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 13:06:03.598273 containerd[1547]: time="2025-12-16T13:06:03.598158554Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 13:06:03.598273 containerd[1547]: time="2025-12-16T13:06:03.598175876Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 13:06:03.598273 containerd[1547]: time="2025-12-16T13:06:03.598188560Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 13:06:03.598273 containerd[1547]: time="2025-12-16T13:06:03.598197797Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 13:06:03.598273 containerd[1547]: time="2025-12-16T13:06:03.598212094Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 13:06:03.598401 containerd[1547]: time="2025-12-16T13:06:03.598378466Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 13:06:03.598425 containerd[1547]: time="2025-12-16T13:06:03.598407370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 13:06:03.598425 containerd[1547]: time="2025-12-16T13:06:03.598422318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 13:06:03.598461 containerd[1547]: time="2025-12-16T13:06:03.598433359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 13:06:03.598461 containerd[1547]: time="2025-12-16T13:06:03.598445211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 13:06:03.598498 containerd[1547]: time="2025-12-16T13:06:03.598456482Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 13:06:03.598498 containerd[1547]: time="2025-12-16T13:06:03.598476330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 13:06:03.598498 containerd[1547]: time="2025-12-16T13:06:03.598486579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases
type=io.containerd.grpc.v1 Dec 16 13:06:03.598498 containerd[1547]: time="2025-12-16T13:06:03.598496668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:06:03.598579 containerd[1547]: time="2025-12-16T13:06:03.598507328Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:06:03.598579 containerd[1547]: time="2025-12-16T13:06:03.598524179Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:06:03.598738 containerd[1547]: time="2025-12-16T13:06:03.598706481Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:06:03.598738 containerd[1547]: time="2025-12-16T13:06:03.598736948Z" level=info msg="Start snapshots syncer" Dec 16 13:06:03.598780 containerd[1547]: time="2025-12-16T13:06:03.598768307Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:06:03.599299 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Dec 16 13:06:03.601204 containerd[1547]: time="2025-12-16T13:06:03.601165814Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:06:03.601382 containerd[1547]: time="2025-12-16T13:06:03.601361421Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox 
type=io.containerd.podsandbox.controller.v1 Dec 16 13:06:03.601529 containerd[1547]: time="2025-12-16T13:06:03.601511433Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:06:03.601708 containerd[1547]: time="2025-12-16T13:06:03.601690879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:06:03.601907 containerd[1547]: time="2025-12-16T13:06:03.601797189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:06:03.602094 containerd[1547]: time="2025-12-16T13:06:03.602069600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:06:03.602161 containerd[1547]: time="2025-12-16T13:06:03.602147957Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:06:03.602210 containerd[1547]: time="2025-12-16T13:06:03.602199203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:06:03.602256 containerd[1547]: time="2025-12-16T13:06:03.602245429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:06:03.602301 containerd[1547]: time="2025-12-16T13:06:03.602290784Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:06:03.602362 containerd[1547]: time="2025-12-16T13:06:03.602351128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:06:03.602416 containerd[1547]: time="2025-12-16T13:06:03.602404788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:06:03.602468 containerd[1547]: time="2025-12-16T13:06:03.602457457Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart 
type=io.containerd.monitor.container.v1 Dec 16 13:06:03.602558 containerd[1547]: time="2025-12-16T13:06:03.602544911Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:06:03.602670 containerd[1547]: time="2025-12-16T13:06:03.602656651Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:06:03.602716 containerd[1547]: time="2025-12-16T13:06:03.602705192Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:06:03.602760 containerd[1547]: time="2025-12-16T13:06:03.602748874Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:06:03.602805 containerd[1547]: time="2025-12-16T13:06:03.602792225Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:06:03.602851 containerd[1547]: time="2025-12-16T13:06:03.602841147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:06:03.602936 containerd[1547]: time="2025-12-16T13:06:03.602921908Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:06:03.602990 containerd[1547]: time="2025-12-16T13:06:03.602980578Z" level=info msg="runtime interface created" Dec 16 13:06:03.603028 containerd[1547]: time="2025-12-16T13:06:03.603019601Z" level=info msg="created NRI interface" Dec 16 13:06:03.603090 containerd[1547]: time="2025-12-16T13:06:03.603068684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:06:03.603141 containerd[1547]: time="2025-12-16T13:06:03.603130880Z" level=info msg="Connect containerd service" Dec 16 13:06:03.603197 containerd[1547]: 
time="2025-12-16T13:06:03.603186204Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:06:03.603953 containerd[1547]: time="2025-12-16T13:06:03.603934998Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:06:03.621475 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:06:03.626406 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:06:03.629588 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:06:03.631824 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:06:03.689268 containerd[1547]: time="2025-12-16T13:06:03.689204189Z" level=info msg="Start subscribing containerd event" Dec 16 13:06:03.689362 containerd[1547]: time="2025-12-16T13:06:03.689270734Z" level=info msg="Start recovering state" Dec 16 13:06:03.689383 containerd[1547]: time="2025-12-16T13:06:03.689354541Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:06:03.689448 containerd[1547]: time="2025-12-16T13:06:03.689423661Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:06:03.689448 containerd[1547]: time="2025-12-16T13:06:03.689443157Z" level=info msg="Start event monitor" Dec 16 13:06:03.689490 containerd[1547]: time="2025-12-16T13:06:03.689460049Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:06:03.689490 containerd[1547]: time="2025-12-16T13:06:03.689475228Z" level=info msg="Start streaming server" Dec 16 13:06:03.689490 containerd[1547]: time="2025-12-16T13:06:03.689491067Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:06:03.689490 containerd[1547]: time="2025-12-16T13:06:03.689498110Z" level=info msg="runtime interface starting up..." 
Dec 16 13:06:03.689490 containerd[1547]: time="2025-12-16T13:06:03.689503901Z" level=info msg="starting plugins..." Dec 16 13:06:03.689617 containerd[1547]: time="2025-12-16T13:06:03.689524790Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:06:03.689796 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:06:03.690548 containerd[1547]: time="2025-12-16T13:06:03.690523925Z" level=info msg="containerd successfully booted in 0.119134s" Dec 16 13:06:03.692215 tar[1543]: linux-amd64/README.md Dec 16 13:06:03.719231 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:06:05.094116 systemd-networkd[1455]: eth0: Gained IPv6LL Dec 16 13:06:05.097426 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 13:06:05.100275 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:06:05.103739 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 13:06:05.106963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:05.110186 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:06:05.143118 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:06:05.146734 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 13:06:05.147044 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 13:06:05.149350 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:06:05.812678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:06:05.815088 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:06:05.816953 systemd[1]: Startup finished in 2.848s (kernel) + 6.082s (initrd) + 4.782s (userspace) = 13.713s. 
Dec 16 13:06:05.817609 (kubelet)[1650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:06:06.236674 kubelet[1650]: E1216 13:06:06.236532 1650 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:06:06.240892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:06:06.241109 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:06:06.241540 systemd[1]: kubelet.service: Consumed 978ms CPU time, 268.8M memory peak. Dec 16 13:06:07.956704 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:06:07.958115 systemd[1]: Started sshd@0-10.0.0.87:22-10.0.0.1:60974.service - OpenSSH per-connection server daemon (10.0.0.1:60974). Dec 16 13:06:08.027615 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 60974 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:06:08.029356 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:08.035615 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:06:08.036644 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:06:08.042635 systemd-logind[1530]: New session 1 of user core. Dec 16 13:06:08.058268 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:06:08.061094 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Dec 16 13:06:08.077269 (systemd)[1668]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:06:08.079694 systemd-logind[1530]: New session c1 of user core. Dec 16 13:06:08.219538 systemd[1668]: Queued start job for default target default.target. Dec 16 13:06:08.231071 systemd[1668]: Created slice app.slice - User Application Slice. Dec 16 13:06:08.231095 systemd[1668]: Reached target paths.target - Paths. Dec 16 13:06:08.231130 systemd[1668]: Reached target timers.target - Timers. Dec 16 13:06:08.232548 systemd[1668]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:06:08.243433 systemd[1668]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:06:08.243552 systemd[1668]: Reached target sockets.target - Sockets. Dec 16 13:06:08.243592 systemd[1668]: Reached target basic.target - Basic System. Dec 16 13:06:08.243630 systemd[1668]: Reached target default.target - Main User Target. Dec 16 13:06:08.243662 systemd[1668]: Startup finished in 157ms. Dec 16 13:06:08.243922 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:06:08.245400 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:06:08.310432 systemd[1]: Started sshd@1-10.0.0.87:22-10.0.0.1:60982.service - OpenSSH per-connection server daemon (10.0.0.1:60982). Dec 16 13:06:08.352756 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 60982 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:06:08.354019 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:08.358091 systemd-logind[1530]: New session 2 of user core. Dec 16 13:06:08.372991 systemd[1]: Started session-2.scope - Session 2 of User core. 
Dec 16 13:06:08.425162 sshd[1682]: Connection closed by 10.0.0.1 port 60982 Dec 16 13:06:08.425542 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:08.438125 systemd[1]: sshd@1-10.0.0.87:22-10.0.0.1:60982.service: Deactivated successfully. Dec 16 13:06:08.439736 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 13:06:08.440380 systemd-logind[1530]: Session 2 logged out. Waiting for processes to exit. Dec 16 13:06:08.442844 systemd[1]: Started sshd@2-10.0.0.87:22-10.0.0.1:60986.service - OpenSSH per-connection server daemon (10.0.0.1:60986). Dec 16 13:06:08.443383 systemd-logind[1530]: Removed session 2. Dec 16 13:06:08.499767 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 60986 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:06:08.500902 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:08.505210 systemd-logind[1530]: New session 3 of user core. Dec 16 13:06:08.519006 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:06:08.567262 sshd[1691]: Connection closed by 10.0.0.1 port 60986 Dec 16 13:06:08.567691 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:08.578509 systemd[1]: sshd@2-10.0.0.87:22-10.0.0.1:60986.service: Deactivated successfully. Dec 16 13:06:08.580378 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 13:06:08.581079 systemd-logind[1530]: Session 3 logged out. Waiting for processes to exit. Dec 16 13:06:08.583893 systemd[1]: Started sshd@3-10.0.0.87:22-10.0.0.1:32770.service - OpenSSH per-connection server daemon (10.0.0.1:32770). Dec 16 13:06:08.584433 systemd-logind[1530]: Removed session 3. 
Dec 16 13:06:08.631015 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 32770 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:06:08.632474 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:08.636726 systemd-logind[1530]: New session 4 of user core. Dec 16 13:06:08.645994 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:06:08.698333 sshd[1700]: Connection closed by 10.0.0.1 port 32770 Dec 16 13:06:08.698700 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:08.710333 systemd[1]: sshd@3-10.0.0.87:22-10.0.0.1:32770.service: Deactivated successfully. Dec 16 13:06:08.712001 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:06:08.712643 systemd-logind[1530]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:06:08.714985 systemd[1]: Started sshd@4-10.0.0.87:22-10.0.0.1:32780.service - OpenSSH per-connection server daemon (10.0.0.1:32780). Dec 16 13:06:08.715528 systemd-logind[1530]: Removed session 4. Dec 16 13:06:08.755925 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 32780 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:06:08.757014 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:08.760670 systemd-logind[1530]: New session 5 of user core. Dec 16 13:06:08.774012 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 16 13:06:08.830116 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:06:08.830403 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:06:08.844221 sudo[1710]: pam_unix(sudo:session): session closed for user root Dec 16 13:06:08.845682 sshd[1709]: Connection closed by 10.0.0.1 port 32780 Dec 16 13:06:08.846125 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:08.856227 systemd[1]: sshd@4-10.0.0.87:22-10.0.0.1:32780.service: Deactivated successfully. Dec 16 13:06:08.857809 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:06:08.858540 systemd-logind[1530]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:06:08.861092 systemd[1]: Started sshd@5-10.0.0.87:22-10.0.0.1:32796.service - OpenSSH per-connection server daemon (10.0.0.1:32796). Dec 16 13:06:08.861820 systemd-logind[1530]: Removed session 5. Dec 16 13:06:08.920292 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 32796 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:06:08.921540 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:08.925683 systemd-logind[1530]: New session 6 of user core. Dec 16 13:06:08.935996 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 16 13:06:08.988125 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:06:08.988408 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:06:08.994140 sudo[1721]: pam_unix(sudo:session): session closed for user root Dec 16 13:06:09.000023 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:06:09.000320 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:06:09.009600 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:06:09.046481 augenrules[1743]: No rules Dec 16 13:06:09.048232 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:06:09.048514 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:06:09.049664 sudo[1720]: pam_unix(sudo:session): session closed for user root Dec 16 13:06:09.051037 sshd[1719]: Connection closed by 10.0.0.1 port 32796 Dec 16 13:06:09.051396 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:09.063481 systemd[1]: sshd@5-10.0.0.87:22-10.0.0.1:32796.service: Deactivated successfully. Dec 16 13:06:09.065217 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:06:09.065943 systemd-logind[1530]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:06:09.068588 systemd[1]: Started sshd@6-10.0.0.87:22-10.0.0.1:32812.service - OpenSSH per-connection server daemon (10.0.0.1:32812). Dec 16 13:06:09.069176 systemd-logind[1530]: Removed session 6. Dec 16 13:06:09.128497 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 32812 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:06:09.129741 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:09.133760 systemd-logind[1530]: New session 7 of user core. 
Dec 16 13:06:09.139981 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:06:09.191349 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:06:09.191634 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:06:09.491890 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 13:06:09.521239 (dockerd)[1777]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:06:09.735469 dockerd[1777]: time="2025-12-16T13:06:09.735407378Z" level=info msg="Starting up" Dec 16 13:06:09.736205 dockerd[1777]: time="2025-12-16T13:06:09.736166272Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:06:09.747123 dockerd[1777]: time="2025-12-16T13:06:09.747028276Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:06:10.080878 dockerd[1777]: time="2025-12-16T13:06:10.080764289Z" level=info msg="Loading containers: start." Dec 16 13:06:10.090897 kernel: Initializing XFRM netlink socket Dec 16 13:06:10.333920 systemd-networkd[1455]: docker0: Link UP Dec 16 13:06:10.339493 dockerd[1777]: time="2025-12-16T13:06:10.339447505Z" level=info msg="Loading containers: done." Dec 16 13:06:10.352430 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck168296649-merged.mount: Deactivated successfully. 
Dec 16 13:06:10.353777 dockerd[1777]: time="2025-12-16T13:06:10.353739492Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:06:10.353834 dockerd[1777]: time="2025-12-16T13:06:10.353815204Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:06:10.353958 dockerd[1777]: time="2025-12-16T13:06:10.353936702Z" level=info msg="Initializing buildkit" Dec 16 13:06:10.382447 dockerd[1777]: time="2025-12-16T13:06:10.382415651Z" level=info msg="Completed buildkit initialization" Dec 16 13:06:10.388281 dockerd[1777]: time="2025-12-16T13:06:10.388237441Z" level=info msg="Daemon has completed initialization" Dec 16 13:06:10.388332 dockerd[1777]: time="2025-12-16T13:06:10.388296562Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:06:10.388480 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:06:11.041990 containerd[1547]: time="2025-12-16T13:06:11.041951892Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 13:06:11.689782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4205170887.mount: Deactivated successfully. 
Dec 16 13:06:12.565786 containerd[1547]: time="2025-12-16T13:06:12.565729243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:12.566649 containerd[1547]: time="2025-12-16T13:06:12.566617449Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712"
Dec 16 13:06:12.567453 containerd[1547]: time="2025-12-16T13:06:12.567420054Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:12.570070 containerd[1547]: time="2025-12-16T13:06:12.570048414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:12.570883 containerd[1547]: time="2025-12-16T13:06:12.570814221Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.528825069s"
Dec 16 13:06:12.570883 containerd[1547]: time="2025-12-16T13:06:12.570884753Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Dec 16 13:06:12.571449 containerd[1547]: time="2025-12-16T13:06:12.571421009Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Dec 16 13:06:13.759512 containerd[1547]: time="2025-12-16T13:06:13.759456103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:13.760313 containerd[1547]: time="2025-12-16T13:06:13.760263448Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781"
Dec 16 13:06:13.761468 containerd[1547]: time="2025-12-16T13:06:13.761428623Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:13.764048 containerd[1547]: time="2025-12-16T13:06:13.764022499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:13.764955 containerd[1547]: time="2025-12-16T13:06:13.764913249Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.193461392s"
Dec 16 13:06:13.764955 containerd[1547]: time="2025-12-16T13:06:13.764950710Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Dec 16 13:06:13.765392 containerd[1547]: time="2025-12-16T13:06:13.765360598Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Dec 16 13:06:14.981342 containerd[1547]: time="2025-12-16T13:06:14.981268143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:14.982497 containerd[1547]: time="2025-12-16T13:06:14.982466010Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102"
Dec 16 13:06:14.983671 containerd[1547]: time="2025-12-16T13:06:14.983621979Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:14.988046 containerd[1547]: time="2025-12-16T13:06:14.988012474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:14.989088 containerd[1547]: time="2025-12-16T13:06:14.989042857Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.223657942s"
Dec 16 13:06:14.989088 containerd[1547]: time="2025-12-16T13:06:14.989085577Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Dec 16 13:06:14.989519 containerd[1547]: time="2025-12-16T13:06:14.989482491Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Dec 16 13:06:16.080548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3058716358.mount: Deactivated successfully.
Dec 16 13:06:16.491453 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:06:16.493021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:06:16.772007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:06:16.776227 (kubelet)[2078]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:06:16.848272 containerd[1547]: time="2025-12-16T13:06:16.848185827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:16.849839 containerd[1547]: time="2025-12-16T13:06:16.849788403Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096"
Dec 16 13:06:16.851174 containerd[1547]: time="2025-12-16T13:06:16.851143325Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:16.853109 containerd[1547]: time="2025-12-16T13:06:16.853074187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:16.853605 containerd[1547]: time="2025-12-16T13:06:16.853580947Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.864050136s"
Dec 16 13:06:16.853661 containerd[1547]: time="2025-12-16T13:06:16.853609651Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Dec 16 13:06:16.854157 containerd[1547]: time="2025-12-16T13:06:16.854136259Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Dec 16 13:06:16.864741 kubelet[2078]: E1216 13:06:16.864554 2078 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:06:16.870664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:06:16.870918 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:06:16.871361 systemd[1]: kubelet.service: Consumed 224ms CPU time, 112M memory peak.
Dec 16 13:06:17.476971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount181720864.mount: Deactivated successfully.
Dec 16 13:06:18.487468 containerd[1547]: time="2025-12-16T13:06:18.487382311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:18.488426 containerd[1547]: time="2025-12-16T13:06:18.488380774Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Dec 16 13:06:18.489644 containerd[1547]: time="2025-12-16T13:06:18.489617364Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:18.491933 containerd[1547]: time="2025-12-16T13:06:18.491907890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:18.492733 containerd[1547]: time="2025-12-16T13:06:18.492704344Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.63854406s"
Dec 16 13:06:18.492733 containerd[1547]: time="2025-12-16T13:06:18.492732938Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Dec 16 13:06:18.493188 containerd[1547]: time="2025-12-16T13:06:18.493166000Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 16 13:06:18.984358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1325878398.mount: Deactivated successfully.
Dec 16 13:06:18.992688 containerd[1547]: time="2025-12-16T13:06:18.992638347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:06:18.993537 containerd[1547]: time="2025-12-16T13:06:18.993491207Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Dec 16 13:06:18.994932 containerd[1547]: time="2025-12-16T13:06:18.994900440Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:06:18.996920 containerd[1547]: time="2025-12-16T13:06:18.996889792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:06:18.997512 containerd[1547]: time="2025-12-16T13:06:18.997467235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 504.277771ms"
Dec 16 13:06:18.997512 containerd[1547]: time="2025-12-16T13:06:18.997502160Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 16 13:06:18.998028 containerd[1547]: time="2025-12-16T13:06:18.997998692Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Dec 16 13:06:19.641457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059060124.mount: Deactivated successfully.
Dec 16 13:06:22.089677 containerd[1547]: time="2025-12-16T13:06:22.089603689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:22.090638 containerd[1547]: time="2025-12-16T13:06:22.090585791Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227"
Dec 16 13:06:22.091918 containerd[1547]: time="2025-12-16T13:06:22.091879989Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:22.094439 containerd[1547]: time="2025-12-16T13:06:22.094406077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:22.095371 containerd[1547]: time="2025-12-16T13:06:22.095334148Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.097307203s"
Dec 16 13:06:22.095371 containerd[1547]: time="2025-12-16T13:06:22.095361469Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Dec 16 13:06:25.082526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:06:25.082743 systemd[1]: kubelet.service: Consumed 224ms CPU time, 112M memory peak.
Dec 16 13:06:25.085173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:06:25.108461 systemd[1]: Reload requested from client PID 2229 ('systemctl') (unit session-7.scope)...
Dec 16 13:06:25.108476 systemd[1]: Reloading...
Dec 16 13:06:25.190020 zram_generator::config[2274]: No configuration found.
Dec 16 13:06:25.467384 systemd[1]: Reloading finished in 358 ms.
Dec 16 13:06:25.536601 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 13:06:25.536700 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 13:06:25.537018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:06:25.537063 systemd[1]: kubelet.service: Consumed 146ms CPU time, 98.3M memory peak.
Dec 16 13:06:25.538778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:06:25.723601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:06:25.727553 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 13:06:25.763133 kubelet[2320]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:06:25.763133 kubelet[2320]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 13:06:25.763133 kubelet[2320]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:06:25.763457 kubelet[2320]: I1216 13:06:25.763179 2320 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 13:06:26.039248 kubelet[2320]: I1216 13:06:26.039154 2320 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Dec 16 13:06:26.039248 kubelet[2320]: I1216 13:06:26.039182 2320 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 13:06:26.039434 kubelet[2320]: I1216 13:06:26.039413 2320 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 13:06:26.067800 kubelet[2320]: E1216 13:06:26.067743 2320 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 13:06:26.068977 kubelet[2320]: I1216 13:06:26.068948 2320 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 13:06:26.077148 kubelet[2320]: I1216 13:06:26.077116 2320 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 13:06:26.082450 kubelet[2320]: I1216 13:06:26.082422 2320 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 13:06:26.082679 kubelet[2320]: I1216 13:06:26.082644 2320 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 13:06:26.082819 kubelet[2320]: I1216 13:06:26.082671 2320 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 13:06:26.082935 kubelet[2320]: I1216 13:06:26.082825 2320 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 13:06:26.082935 kubelet[2320]: I1216 13:06:26.082834 2320 container_manager_linux.go:303] "Creating device plugin manager"
Dec 16 13:06:26.083564 kubelet[2320]: I1216 13:06:26.083546 2320 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:06:26.086511 kubelet[2320]: I1216 13:06:26.085533 2320 kubelet.go:480] "Attempting to sync node with API server"
Dec 16 13:06:26.086511 kubelet[2320]: I1216 13:06:26.085563 2320 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 13:06:26.086511 kubelet[2320]: I1216 13:06:26.085586 2320 kubelet.go:386] "Adding apiserver pod source"
Dec 16 13:06:26.086511 kubelet[2320]: I1216 13:06:26.085600 2320 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 13:06:26.091039 kubelet[2320]: I1216 13:06:26.091019 2320 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 13:06:26.091442 kubelet[2320]: I1216 13:06:26.091405 2320 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 13:06:26.091593 kubelet[2320]: E1216 13:06:26.091519 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 13:06:26.092364 kubelet[2320]: W1216 13:06:26.092331 2320 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 13:06:26.093319 kubelet[2320]: E1216 13:06:26.093233 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 13:06:26.095204 kubelet[2320]: I1216 13:06:26.095163 2320 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 13:06:26.095265 kubelet[2320]: I1216 13:06:26.095237 2320 server.go:1289] "Started kubelet"
Dec 16 13:06:26.095743 kubelet[2320]: I1216 13:06:26.095615 2320 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 13:06:26.096958 kubelet[2320]: I1216 13:06:26.096758 2320 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 13:06:26.096958 kubelet[2320]: I1216 13:06:26.096753 2320 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 13:06:26.101533 kubelet[2320]: I1216 13:06:26.100628 2320 server.go:317] "Adding debug handlers to kubelet server"
Dec 16 13:06:26.102126 kubelet[2320]: I1216 13:06:26.102109 2320 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 13:06:26.102760 kubelet[2320]: E1216 13:06:26.101799 2320 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.87:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.87:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b3f3377b3239 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 13:06:26.095190585 +0000 UTC m=+0.363839580,LastTimestamp:2025-12-16 13:06:26.095190585 +0000 UTC m=+0.363839580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 16 13:06:26.102914 kubelet[2320]: I1216 13:06:26.102891 2320 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 13:06:26.103131 kubelet[2320]: E1216 13:06:26.102971 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 13:06:26.103204 kubelet[2320]: I1216 13:06:26.103184 2320 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 13:06:26.103425 kubelet[2320]: I1216 13:06:26.103404 2320 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 16 13:06:26.103553 kubelet[2320]: I1216 13:06:26.103535 2320 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 13:06:26.104851 kubelet[2320]: E1216 13:06:26.104828 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 13:06:26.105066 kubelet[2320]: E1216 13:06:26.105028 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="200ms"
Dec 16 13:06:26.105730 kubelet[2320]: E1216 13:06:26.105714 2320 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 13:06:26.106692 kubelet[2320]: I1216 13:06:26.106676 2320 factory.go:223] Registration of the containerd container factory successfully
Dec 16 13:06:26.106755 kubelet[2320]: I1216 13:06:26.106746 2320 factory.go:223] Registration of the systemd container factory successfully
Dec 16 13:06:26.107020 kubelet[2320]: I1216 13:06:26.107000 2320 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 13:06:26.118269 kubelet[2320]: I1216 13:06:26.118254 2320 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 13:06:26.118356 kubelet[2320]: I1216 13:06:26.118333 2320 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 13:06:26.118356 kubelet[2320]: I1216 13:06:26.118350 2320 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:06:26.123039 kubelet[2320]: I1216 13:06:26.122990 2320 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Dec 16 13:06:26.124385 kubelet[2320]: I1216 13:06:26.124351 2320 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Dec 16 13:06:26.124385 kubelet[2320]: I1216 13:06:26.124380 2320 status_manager.go:230] "Starting to sync pod status with apiserver"
Dec 16 13:06:26.124476 kubelet[2320]: I1216 13:06:26.124404 2320 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 13:06:26.124476 kubelet[2320]: I1216 13:06:26.124411 2320 kubelet.go:2436] "Starting kubelet main sync loop"
Dec 16 13:06:26.124476 kubelet[2320]: E1216 13:06:26.124447 2320 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 13:06:26.124942 kubelet[2320]: E1216 13:06:26.124914 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 16 13:06:26.203554 kubelet[2320]: E1216 13:06:26.203521 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 13:06:26.224913 kubelet[2320]: E1216 13:06:26.224882 2320 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 16 13:06:26.304299 kubelet[2320]: E1216 13:06:26.304142 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 13:06:26.305668 kubelet[2320]: E1216 13:06:26.305628 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="400ms"
Dec 16 13:06:26.405157 kubelet[2320]: E1216 13:06:26.405100 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 13:06:26.425278 kubelet[2320]: E1216 13:06:26.425240 2320 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 16 13:06:26.450214 kubelet[2320]: I1216 13:06:26.450168 2320 policy_none.go:49] "None policy: Start"
Dec 16 13:06:26.450214 kubelet[2320]: I1216 13:06:26.450196 2320 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 16 13:06:26.450295 kubelet[2320]: I1216 13:06:26.450222 2320 state_mem.go:35] "Initializing new in-memory state store"
Dec 16 13:06:26.456750 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 16 13:06:26.472241 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 16 13:06:26.475622 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 16 13:06:26.488794 kubelet[2320]: E1216 13:06:26.488753 2320 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 16 13:06:26.489033 kubelet[2320]: I1216 13:06:26.488999 2320 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 13:06:26.489033 kubelet[2320]: I1216 13:06:26.489020 2320 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 13:06:26.489440 kubelet[2320]: I1216 13:06:26.489214 2320 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 13:06:26.490312 kubelet[2320]: E1216 13:06:26.490278 2320 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 13:06:26.490379 kubelet[2320]: E1216 13:06:26.490351 2320 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Dec 16 13:06:26.590552 kubelet[2320]: I1216 13:06:26.590464 2320 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 13:06:26.590846 kubelet[2320]: E1216 13:06:26.590808 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost"
Dec 16 13:06:26.706315 kubelet[2320]: E1216 13:06:26.706258 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="800ms"
Dec 16 13:06:26.792349 kubelet[2320]: I1216 13:06:26.792329 2320 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 13:06:26.792941 kubelet[2320]: E1216 13:06:26.792601 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost"
Dec 16 13:06:26.909083 kubelet[2320]: I1216 13:06:26.908984 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46518866e9b4e43ed8b9aab44f1c7c08-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"46518866e9b4e43ed8b9aab44f1c7c08\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 13:06:26.909083 kubelet[2320]: I1216 13:06:26.909066 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46518866e9b4e43ed8b9aab44f1c7c08-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"46518866e9b4e43ed8b9aab44f1c7c08\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 13:06:26.909191 kubelet[2320]: I1216 13:06:26.909106 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46518866e9b4e43ed8b9aab44f1c7c08-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"46518866e9b4e43ed8b9aab44f1c7c08\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 13:06:27.152160 kubelet[2320]: E1216 13:06:27.152109 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 13:06:27.194530 kubelet[2320]: I1216 13:06:27.194409 2320 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 13:06:27.194736 kubelet[2320]: E1216 13:06:27.194696 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost"
Dec 16 13:06:27.284789 kubelet[2320]: E1216 13:06:27.284739 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 16 13:06:27.405160 systemd[1]: Created slice kubepods-burstable-pod46518866e9b4e43ed8b9aab44f1c7c08.slice - libcontainer container kubepods-burstable-pod46518866e9b4e43ed8b9aab44f1c7c08.slice.
Dec 16 13:06:27.407539 kubelet[2320]: E1216 13:06:27.407516 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 13:06:27.411876 kubelet[2320]: I1216 13:06:27.411829 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:06:27.411916 kubelet[2320]: I1216 13:06:27.411858 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:06:27.411916 kubelet[2320]: I1216 13:06:27.411900 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:06:27.411968 kubelet[2320]: I1216 13:06:27.411917 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:06:27.411968 kubelet[2320]: I1216 13:06:27.411965 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:06:27.413692 kubelet[2320]: E1216 13:06:27.413663 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 13:06:27.414425 containerd[1547]: time="2025-12-16T13:06:27.414385748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:46518866e9b4e43ed8b9aab44f1c7c08,Namespace:kube-system,Attempt:0,}"
Dec 16 13:06:27.507644 kubelet[2320]: E1216 13:06:27.507489 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="1.6s"
Dec 16 13:06:27.636921 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice.
Dec 16 13:06:27.638587 kubelet[2320]: E1216 13:06:27.638555 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:06:27.639189 containerd[1547]: time="2025-12-16T13:06:27.639144491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:27.683818 kubelet[2320]: E1216 13:06:27.683767 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:06:27.714495 kubelet[2320]: I1216 13:06:27.714455 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 13:06:27.776733 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Dec 16 13:06:27.778614 kubelet[2320]: E1216 13:06:27.778576 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:06:27.996113 kubelet[2320]: I1216 13:06:27.996067 2320 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:06:27.996544 kubelet[2320]: E1216 13:06:27.996470 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Dec 16 13:06:28.079957 containerd[1547]: time="2025-12-16T13:06:28.079820333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:28.121314 kubelet[2320]: E1216 13:06:28.121282 2320 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:06:28.726903 containerd[1547]: time="2025-12-16T13:06:28.726312628Z" level=info msg="connecting to shim 9c5a04744584535296f20d8e84666f0d19ce6d4da45660597f7502ea6184a610" address="unix:///run/containerd/s/5da873d3de6653b370b079740d0c840757f3d7f499bdbecdf220af5c4081bad1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:28.729892 containerd[1547]: time="2025-12-16T13:06:28.729842229Z" level=info msg="connecting to shim f8b51c39eb3354a197358b5b3acd7c06bc551230fb5167ef406a4aef32df6eeb" address="unix:///run/containerd/s/52891779da16a81373b3c414e23e16cf0d495d99a814a253de3278fc2609ade2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:28.741290 containerd[1547]: time="2025-12-16T13:06:28.741246029Z" 
level=info msg="connecting to shim 270a85ba5adc784b37e26cc471b1fd40e9ef92bed611bfe88e16c978bdf0814f" address="unix:///run/containerd/s/9df24c46684d7919dac26b16ff54f19fca680ec8302f235852065e43e0af72b2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:28.756088 systemd[1]: Started cri-containerd-f8b51c39eb3354a197358b5b3acd7c06bc551230fb5167ef406a4aef32df6eeb.scope - libcontainer container f8b51c39eb3354a197358b5b3acd7c06bc551230fb5167ef406a4aef32df6eeb. Dec 16 13:06:28.759269 systemd[1]: Started cri-containerd-9c5a04744584535296f20d8e84666f0d19ce6d4da45660597f7502ea6184a610.scope - libcontainer container 9c5a04744584535296f20d8e84666f0d19ce6d4da45660597f7502ea6184a610. Dec 16 13:06:28.763373 systemd[1]: Started cri-containerd-270a85ba5adc784b37e26cc471b1fd40e9ef92bed611bfe88e16c978bdf0814f.scope - libcontainer container 270a85ba5adc784b37e26cc471b1fd40e9ef92bed611bfe88e16c978bdf0814f. Dec 16 13:06:28.809723 containerd[1547]: time="2025-12-16T13:06:28.809683448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c5a04744584535296f20d8e84666f0d19ce6d4da45660597f7502ea6184a610\"" Dec 16 13:06:28.814368 containerd[1547]: time="2025-12-16T13:06:28.814340323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:46518866e9b4e43ed8b9aab44f1c7c08,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8b51c39eb3354a197358b5b3acd7c06bc551230fb5167ef406a4aef32df6eeb\"" Dec 16 13:06:28.816992 containerd[1547]: time="2025-12-16T13:06:28.816965407Z" level=info msg="CreateContainer within sandbox \"9c5a04744584535296f20d8e84666f0d19ce6d4da45660597f7502ea6184a610\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:06:28.820306 containerd[1547]: time="2025-12-16T13:06:28.820281898Z" level=info msg="CreateContainer within sandbox 
\"f8b51c39eb3354a197358b5b3acd7c06bc551230fb5167ef406a4aef32df6eeb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:06:28.826993 containerd[1547]: time="2025-12-16T13:06:28.826972036Z" level=info msg="Container e329d43790bac7d77151c4e7c14ec5a358daa4e46a504a21893b6ef09e050ee1: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:28.827966 containerd[1547]: time="2025-12-16T13:06:28.827935634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"270a85ba5adc784b37e26cc471b1fd40e9ef92bed611bfe88e16c978bdf0814f\"" Dec 16 13:06:28.832554 containerd[1547]: time="2025-12-16T13:06:28.832533678Z" level=info msg="CreateContainer within sandbox \"270a85ba5adc784b37e26cc471b1fd40e9ef92bed611bfe88e16c978bdf0814f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:06:28.835098 containerd[1547]: time="2025-12-16T13:06:28.835066870Z" level=info msg="Container 0846acfaac57f5d28fa2cbde7960e651ee1f5aed5925338a509e4753454894c4: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:28.836212 containerd[1547]: time="2025-12-16T13:06:28.836182232Z" level=info msg="CreateContainer within sandbox \"9c5a04744584535296f20d8e84666f0d19ce6d4da45660597f7502ea6184a610\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e329d43790bac7d77151c4e7c14ec5a358daa4e46a504a21893b6ef09e050ee1\"" Dec 16 13:06:28.836672 containerd[1547]: time="2025-12-16T13:06:28.836644710Z" level=info msg="StartContainer for \"e329d43790bac7d77151c4e7c14ec5a358daa4e46a504a21893b6ef09e050ee1\"" Dec 16 13:06:28.837592 containerd[1547]: time="2025-12-16T13:06:28.837561329Z" level=info msg="connecting to shim e329d43790bac7d77151c4e7c14ec5a358daa4e46a504a21893b6ef09e050ee1" address="unix:///run/containerd/s/5da873d3de6653b370b079740d0c840757f3d7f499bdbecdf220af5c4081bad1" protocol=ttrpc version=3 
Dec 16 13:06:28.842887 containerd[1547]: time="2025-12-16T13:06:28.842842355Z" level=info msg="Container 94481fdbb0cccf684a7a2a104dbb14f0024312868b89581e9768e111fb02a374: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:28.852639 containerd[1547]: time="2025-12-16T13:06:28.851570416Z" level=info msg="CreateContainer within sandbox \"270a85ba5adc784b37e26cc471b1fd40e9ef92bed611bfe88e16c978bdf0814f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"94481fdbb0cccf684a7a2a104dbb14f0024312868b89581e9768e111fb02a374\"" Dec 16 13:06:28.852639 containerd[1547]: time="2025-12-16T13:06:28.852057189Z" level=info msg="StartContainer for \"94481fdbb0cccf684a7a2a104dbb14f0024312868b89581e9768e111fb02a374\"" Dec 16 13:06:28.853522 containerd[1547]: time="2025-12-16T13:06:28.853502750Z" level=info msg="CreateContainer within sandbox \"f8b51c39eb3354a197358b5b3acd7c06bc551230fb5167ef406a4aef32df6eeb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0846acfaac57f5d28fa2cbde7960e651ee1f5aed5925338a509e4753454894c4\"" Dec 16 13:06:28.853730 containerd[1547]: time="2025-12-16T13:06:28.853714468Z" level=info msg="StartContainer for \"0846acfaac57f5d28fa2cbde7960e651ee1f5aed5925338a509e4753454894c4\"" Dec 16 13:06:28.854518 containerd[1547]: time="2025-12-16T13:06:28.854140216Z" level=info msg="connecting to shim 94481fdbb0cccf684a7a2a104dbb14f0024312868b89581e9768e111fb02a374" address="unix:///run/containerd/s/9df24c46684d7919dac26b16ff54f19fca680ec8302f235852065e43e0af72b2" protocol=ttrpc version=3 Dec 16 13:06:28.854563 containerd[1547]: time="2025-12-16T13:06:28.854547079Z" level=info msg="connecting to shim 0846acfaac57f5d28fa2cbde7960e651ee1f5aed5925338a509e4753454894c4" address="unix:///run/containerd/s/52891779da16a81373b3c414e23e16cf0d495d99a814a253de3278fc2609ade2" protocol=ttrpc version=3 Dec 16 13:06:28.859027 systemd[1]: Started 
cri-containerd-e329d43790bac7d77151c4e7c14ec5a358daa4e46a504a21893b6ef09e050ee1.scope - libcontainer container e329d43790bac7d77151c4e7c14ec5a358daa4e46a504a21893b6ef09e050ee1. Dec 16 13:06:28.876003 systemd[1]: Started cri-containerd-94481fdbb0cccf684a7a2a104dbb14f0024312868b89581e9768e111fb02a374.scope - libcontainer container 94481fdbb0cccf684a7a2a104dbb14f0024312868b89581e9768e111fb02a374. Dec 16 13:06:28.879657 systemd[1]: Started cri-containerd-0846acfaac57f5d28fa2cbde7960e651ee1f5aed5925338a509e4753454894c4.scope - libcontainer container 0846acfaac57f5d28fa2cbde7960e651ee1f5aed5925338a509e4753454894c4. Dec 16 13:06:28.929949 containerd[1547]: time="2025-12-16T13:06:28.929579549Z" level=info msg="StartContainer for \"e329d43790bac7d77151c4e7c14ec5a358daa4e46a504a21893b6ef09e050ee1\" returns successfully" Dec 16 13:06:28.930800 containerd[1547]: time="2025-12-16T13:06:28.930782766Z" level=info msg="StartContainer for \"94481fdbb0cccf684a7a2a104dbb14f0024312868b89581e9768e111fb02a374\" returns successfully" Dec 16 13:06:28.937080 containerd[1547]: time="2025-12-16T13:06:28.937028661Z" level=info msg="StartContainer for \"0846acfaac57f5d28fa2cbde7960e651ee1f5aed5925338a509e4753454894c4\" returns successfully" Dec 16 13:06:29.132657 kubelet[2320]: E1216 13:06:29.132630 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:06:29.137722 kubelet[2320]: E1216 13:06:29.137697 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:06:29.139167 kubelet[2320]: E1216 13:06:29.139144 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:06:29.598269 kubelet[2320]: I1216 13:06:29.598240 2320 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Dec 16 13:06:30.144559 kubelet[2320]: E1216 13:06:30.143560 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:06:30.144559 kubelet[2320]: E1216 13:06:30.143893 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:06:30.176104 kubelet[2320]: E1216 13:06:30.176054 2320 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 16 13:06:30.252906 kubelet[2320]: I1216 13:06:30.252836 2320 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 13:06:30.252906 kubelet[2320]: E1216 13:06:30.252886 2320 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 16 13:06:30.305040 kubelet[2320]: I1216 13:06:30.304989 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 13:06:30.311592 kubelet[2320]: E1216 13:06:30.311447 2320 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 13:06:30.311592 kubelet[2320]: I1216 13:06:30.311476 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 13:06:30.313251 kubelet[2320]: E1216 13:06:30.313224 2320 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 13:06:30.313251 kubelet[2320]: I1216 13:06:30.313247 2320 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Dec 16 13:06:30.314909 kubelet[2320]: E1216 13:06:30.314442 2320 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:06:31.089129 kubelet[2320]: I1216 13:06:31.089089 2320 apiserver.go:52] "Watching apiserver" Dec 16 13:06:31.104302 kubelet[2320]: I1216 13:06:31.104261 2320 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:06:31.142150 kubelet[2320]: I1216 13:06:31.141979 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 13:06:32.170703 systemd[1]: Reload requested from client PID 2606 ('systemctl') (unit session-7.scope)... Dec 16 13:06:32.170718 systemd[1]: Reloading... Dec 16 13:06:32.247902 zram_generator::config[2652]: No configuration found. Dec 16 13:06:32.480074 systemd[1]: Reloading finished in 309 ms. Dec 16 13:06:32.510009 kubelet[2320]: I1216 13:06:32.509981 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:06:32.516702 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:32.526906 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:06:32.527192 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:06:32.527243 systemd[1]: kubelet.service: Consumed 831ms CPU time, 131.5M memory peak. Dec 16 13:06:32.528983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:32.767383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 13:06:32.772234 (kubelet)[2694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:06:32.807528 kubelet[2694]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:06:32.807528 kubelet[2694]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:06:32.807528 kubelet[2694]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:06:32.807986 kubelet[2694]: I1216 13:06:32.807592 2694 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:06:32.815911 kubelet[2694]: I1216 13:06:32.815884 2694 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 13:06:32.815911 kubelet[2694]: I1216 13:06:32.815907 2694 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:06:32.816151 kubelet[2694]: I1216 13:06:32.816129 2694 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:06:32.817214 kubelet[2694]: I1216 13:06:32.817191 2694 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:06:32.819332 kubelet[2694]: I1216 13:06:32.819297 2694 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:06:32.823342 kubelet[2694]: I1216 13:06:32.823317 2694 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Dec 16 13:06:32.828652 kubelet[2694]: I1216 13:06:32.828630 2694 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 13:06:32.828892 kubelet[2694]: I1216 13:06:32.828843 2694 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:06:32.829048 kubelet[2694]: I1216 13:06:32.828887 2694 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVers
ion":2} Dec 16 13:06:32.829118 kubelet[2694]: I1216 13:06:32.829056 2694 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:06:32.829118 kubelet[2694]: I1216 13:06:32.829065 2694 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 13:06:32.829118 kubelet[2694]: I1216 13:06:32.829111 2694 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:06:32.829270 kubelet[2694]: I1216 13:06:32.829256 2694 kubelet.go:480] "Attempting to sync node with API server" Dec 16 13:06:32.829311 kubelet[2694]: I1216 13:06:32.829272 2694 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:06:32.829337 kubelet[2694]: I1216 13:06:32.829311 2694 kubelet.go:386] "Adding apiserver pod source" Dec 16 13:06:32.829337 kubelet[2694]: I1216 13:06:32.829326 2694 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:06:32.831392 kubelet[2694]: I1216 13:06:32.831189 2694 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:06:32.831680 kubelet[2694]: I1216 13:06:32.831664 2694 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:06:32.836601 kubelet[2694]: I1216 13:06:32.836585 2694 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:06:32.836681 kubelet[2694]: I1216 13:06:32.836673 2694 server.go:1289] "Started kubelet" Dec 16 13:06:32.838316 kubelet[2694]: I1216 13:06:32.838303 2694 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:06:32.838442 kubelet[2694]: I1216 13:06:32.838387 2694 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:06:32.838986 kubelet[2694]: I1216 13:06:32.838907 2694 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:06:32.839233 kubelet[2694]: I1216 
13:06:32.839213 2694 server.go:317] "Adding debug handlers to kubelet server" Dec 16 13:06:32.839268 kubelet[2694]: I1216 13:06:32.839240 2694 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:06:32.841158 kubelet[2694]: E1216 13:06:32.841073 2694 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:06:32.841511 kubelet[2694]: I1216 13:06:32.841364 2694 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:06:32.842666 kubelet[2694]: E1216 13:06:32.842639 2694 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:06:32.842714 kubelet[2694]: I1216 13:06:32.842672 2694 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:06:32.842838 kubelet[2694]: I1216 13:06:32.842817 2694 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:06:32.843068 kubelet[2694]: I1216 13:06:32.843057 2694 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:06:32.843932 kubelet[2694]: I1216 13:06:32.843640 2694 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:06:32.843932 kubelet[2694]: I1216 13:06:32.843736 2694 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:06:32.847174 kubelet[2694]: I1216 13:06:32.847146 2694 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:06:32.855230 kubelet[2694]: I1216 13:06:32.855194 2694 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Dec 16 13:06:32.856374 kubelet[2694]: I1216 13:06:32.856349 2694 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 13:06:32.856374 kubelet[2694]: I1216 13:06:32.856366 2694 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 13:06:32.856440 kubelet[2694]: I1216 13:06:32.856391 2694 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:06:32.856440 kubelet[2694]: I1216 13:06:32.856399 2694 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 13:06:32.856486 kubelet[2694]: E1216 13:06:32.856439 2694 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:06:32.886065 kubelet[2694]: I1216 13:06:32.886033 2694 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:06:32.886065 kubelet[2694]: I1216 13:06:32.886053 2694 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:06:32.886065 kubelet[2694]: I1216 13:06:32.886070 2694 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:06:32.886240 kubelet[2694]: I1216 13:06:32.886186 2694 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:06:32.886240 kubelet[2694]: I1216 13:06:32.886195 2694 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:06:32.886240 kubelet[2694]: I1216 13:06:32.886212 2694 policy_none.go:49] "None policy: Start" Dec 16 13:06:32.886240 kubelet[2694]: I1216 13:06:32.886220 2694 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:06:32.886240 kubelet[2694]: I1216 13:06:32.886229 2694 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:06:32.886367 kubelet[2694]: I1216 13:06:32.886317 2694 state_mem.go:75] "Updated machine memory state" Dec 16 13:06:32.890512 kubelet[2694]: E1216 13:06:32.890094 2694 manager.go:517] "Failed to read 
data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:06:32.890512 kubelet[2694]: I1216 13:06:32.890260 2694 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:06:32.890512 kubelet[2694]: I1216 13:06:32.890269 2694 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:06:32.890512 kubelet[2694]: I1216 13:06:32.890463 2694 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:06:32.891937 kubelet[2694]: E1216 13:06:32.891913 2694 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:06:32.957436 kubelet[2694]: I1216 13:06:32.957382 2694 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:06:32.957568 kubelet[2694]: I1216 13:06:32.957474 2694 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 13:06:32.957632 kubelet[2694]: I1216 13:06:32.957588 2694 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 13:06:32.962673 kubelet[2694]: E1216 13:06:32.962634 2694 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:06:32.962673 kubelet[2694]: E1216 13:06:32.962668 2694 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 13:06:32.991884 kubelet[2694]: I1216 13:06:32.991851 2694 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:06:33.000237 kubelet[2694]: I1216 13:06:33.000213 2694 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 13:06:33.000380 
kubelet[2694]: I1216 13:06:33.000268 2694 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 13:06:33.044250 kubelet[2694]: I1216 13:06:33.044154 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46518866e9b4e43ed8b9aab44f1c7c08-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"46518866e9b4e43ed8b9aab44f1c7c08\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:06:33.044250 kubelet[2694]: I1216 13:06:33.044182 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:06:33.044250 kubelet[2694]: I1216 13:06:33.044200 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:06:33.044250 kubelet[2694]: I1216 13:06:33.044218 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:06:33.044250 kubelet[2694]: I1216 13:06:33.044234 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 13:06:33.044454 kubelet[2694]: I1216 13:06:33.044248 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46518866e9b4e43ed8b9aab44f1c7c08-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"46518866e9b4e43ed8b9aab44f1c7c08\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:06:33.044454 kubelet[2694]: I1216 13:06:33.044299 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46518866e9b4e43ed8b9aab44f1c7c08-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"46518866e9b4e43ed8b9aab44f1c7c08\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:06:33.044454 kubelet[2694]: I1216 13:06:33.044345 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:06:33.044454 kubelet[2694]: I1216 13:06:33.044371 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:06:33.172913 sudo[2732]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 16 13:06:33.173242 sudo[2732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 16 13:06:33.459603 sudo[2732]: 
pam_unix(sudo:session): session closed for user root Dec 16 13:06:33.830930 kubelet[2694]: I1216 13:06:33.830906 2694 apiserver.go:52] "Watching apiserver" Dec 16 13:06:33.843369 kubelet[2694]: I1216 13:06:33.843348 2694 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:06:33.867751 kubelet[2694]: I1216 13:06:33.867719 2694 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:06:33.867857 kubelet[2694]: I1216 13:06:33.867845 2694 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 13:06:33.873839 kubelet[2694]: E1216 13:06:33.873808 2694 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 13:06:33.874097 kubelet[2694]: E1216 13:06:33.874079 2694 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:06:33.883977 kubelet[2694]: I1216 13:06:33.883926 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.88391192 podStartE2EDuration="1.88391192s" podCreationTimestamp="2025-12-16 13:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:33.883366928 +0000 UTC m=+1.107334093" watchObservedRunningTime="2025-12-16 13:06:33.88391192 +0000 UTC m=+1.107879085" Dec 16 13:06:33.896891 kubelet[2694]: I1216 13:06:33.896782 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.896746875 podStartE2EDuration="1.896746875s" podCreationTimestamp="2025-12-16 13:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:33.891264852 +0000 UTC m=+1.115232017" watchObservedRunningTime="2025-12-16 13:06:33.896746875 +0000 UTC m=+1.120714030" Dec 16 13:06:33.902390 kubelet[2694]: I1216 13:06:33.902355 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.902348171 podStartE2EDuration="2.902348171s" podCreationTimestamp="2025-12-16 13:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:33.896966998 +0000 UTC m=+1.120934153" watchObservedRunningTime="2025-12-16 13:06:33.902348171 +0000 UTC m=+1.126315336" Dec 16 13:06:34.908590 sudo[1756]: pam_unix(sudo:session): session closed for user root Dec 16 13:06:34.910607 sshd[1755]: Connection closed by 10.0.0.1 port 32812 Dec 16 13:06:34.911042 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:34.914814 systemd[1]: sshd@6-10.0.0.87:22-10.0.0.1:32812.service: Deactivated successfully. Dec 16 13:06:34.916970 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:06:34.917177 systemd[1]: session-7.scope: Consumed 4.888s CPU time, 258.7M memory peak. Dec 16 13:06:34.919036 systemd-logind[1530]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:06:34.920348 systemd-logind[1530]: Removed session 7. Dec 16 13:06:38.821848 kubelet[2694]: I1216 13:06:38.821818 2694 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:06:38.822372 kubelet[2694]: I1216 13:06:38.822325 2694 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:06:38.822401 containerd[1547]: time="2025-12-16T13:06:38.822155017Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 16 13:06:39.403626 systemd[1]: Created slice kubepods-besteffort-pod27e8190b_19ae_488a_8a58_acd57e0766ac.slice - libcontainer container kubepods-besteffort-pod27e8190b_19ae_488a_8a58_acd57e0766ac.slice. Dec 16 13:06:39.420600 systemd[1]: Created slice kubepods-burstable-pod5de33345_e876_467f_b67c_beadd8290182.slice - libcontainer container kubepods-burstable-pod5de33345_e876_467f_b67c_beadd8290182.slice. Dec 16 13:06:39.487768 kubelet[2694]: I1216 13:06:39.487718 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-cilium-run\") pod \"cilium-gk85x\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " pod="kube-system/cilium-gk85x" Dec 16 13:06:39.487768 kubelet[2694]: I1216 13:06:39.487774 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-cilium-cgroup\") pod \"cilium-gk85x\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " pod="kube-system/cilium-gk85x" Dec 16 13:06:39.487960 kubelet[2694]: I1216 13:06:39.487793 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-cni-path\") pod \"cilium-gk85x\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " pod="kube-system/cilium-gk85x" Dec 16 13:06:39.487960 kubelet[2694]: I1216 13:06:39.487809 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5de33345-e876-467f-b67c-beadd8290182-clustermesh-secrets\") pod \"cilium-gk85x\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " pod="kube-system/cilium-gk85x" Dec 16 13:06:39.487960 kubelet[2694]: I1216 13:06:39.487824 2694 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27e8190b-19ae-488a-8a58-acd57e0766ac-lib-modules\") pod \"kube-proxy-jnhxb\" (UID: \"27e8190b-19ae-488a-8a58-acd57e0766ac\") " pod="kube-system/kube-proxy-jnhxb" Dec 16 13:06:39.487960 kubelet[2694]: I1216 13:06:39.487837 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-hostproc\") pod \"cilium-gk85x\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " pod="kube-system/cilium-gk85x" Dec 16 13:06:39.487960 kubelet[2694]: I1216 13:06:39.487850 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-etc-cni-netd\") pod \"cilium-gk85x\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " pod="kube-system/cilium-gk85x" Dec 16 13:06:39.487960 kubelet[2694]: I1216 13:06:39.487901 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-xtables-lock\") pod \"cilium-gk85x\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " pod="kube-system/cilium-gk85x" Dec 16 13:06:39.488109 kubelet[2694]: I1216 13:06:39.487940 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-host-proc-sys-kernel\") pod \"cilium-gk85x\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " pod="kube-system/cilium-gk85x" Dec 16 13:06:39.488109 kubelet[2694]: I1216 13:06:39.488009 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/5de33345-e876-467f-b67c-beadd8290182-hubble-tls\") pod \"cilium-gk85x\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " pod="kube-system/cilium-gk85x" Dec 16 13:06:39.488109 kubelet[2694]: I1216 13:06:39.488025 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27e8190b-19ae-488a-8a58-acd57e0766ac-xtables-lock\") pod \"kube-proxy-jnhxb\" (UID: \"27e8190b-19ae-488a-8a58-acd57e0766ac\") " pod="kube-system/kube-proxy-jnhxb" Dec 16 13:06:39.488109 kubelet[2694]: I1216 13:06:39.488038 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-host-proc-sys-net\") pod \"cilium-gk85x\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " pod="kube-system/cilium-gk85x" Dec 16 13:06:39.488109 kubelet[2694]: I1216 13:06:39.488062 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw598\" (UniqueName: \"kubernetes.io/projected/5de33345-e876-467f-b67c-beadd8290182-kube-api-access-cw598\") pod \"cilium-gk85x\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " pod="kube-system/cilium-gk85x" Dec 16 13:06:39.488227 kubelet[2694]: I1216 13:06:39.488077 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27e8190b-19ae-488a-8a58-acd57e0766ac-kube-proxy\") pod \"kube-proxy-jnhxb\" (UID: \"27e8190b-19ae-488a-8a58-acd57e0766ac\") " pod="kube-system/kube-proxy-jnhxb" Dec 16 13:06:39.488227 kubelet[2694]: I1216 13:06:39.488102 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-bpf-maps\") pod \"cilium-gk85x\" (UID: 
\"5de33345-e876-467f-b67c-beadd8290182\") " pod="kube-system/cilium-gk85x" Dec 16 13:06:39.488227 kubelet[2694]: I1216 13:06:39.488120 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-lib-modules\") pod \"cilium-gk85x\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " pod="kube-system/cilium-gk85x" Dec 16 13:06:39.488227 kubelet[2694]: I1216 13:06:39.488139 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5de33345-e876-467f-b67c-beadd8290182-cilium-config-path\") pod \"cilium-gk85x\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " pod="kube-system/cilium-gk85x" Dec 16 13:06:39.488227 kubelet[2694]: I1216 13:06:39.488167 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrfr6\" (UniqueName: \"kubernetes.io/projected/27e8190b-19ae-488a-8a58-acd57e0766ac-kube-api-access-vrfr6\") pod \"kube-proxy-jnhxb\" (UID: \"27e8190b-19ae-488a-8a58-acd57e0766ac\") " pod="kube-system/kube-proxy-jnhxb" Dec 16 13:06:39.719199 containerd[1547]: time="2025-12-16T13:06:39.719049614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jnhxb,Uid:27e8190b-19ae-488a-8a58-acd57e0766ac,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:39.723751 containerd[1547]: time="2025-12-16T13:06:39.723713238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gk85x,Uid:5de33345-e876-467f-b67c-beadd8290182,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:39.762688 containerd[1547]: time="2025-12-16T13:06:39.762644699Z" level=info msg="connecting to shim bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79" address="unix:///run/containerd/s/727f1ef073063c9a452ef7040899e63158a7393c0bc692cce8f53064198c6d94" namespace=k8s.io protocol=ttrpc 
version=3 Dec 16 13:06:39.763213 containerd[1547]: time="2025-12-16T13:06:39.763115611Z" level=info msg="connecting to shim f75a06b6460064e6cf650939d62fffb4b698e3df6cf182329fe096fc28ac92e4" address="unix:///run/containerd/s/9367f720da46a24736c2a2bd8d782dcfb17325ca819462731bd8cdeeafa04dc7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:39.811014 systemd[1]: Started cri-containerd-f75a06b6460064e6cf650939d62fffb4b698e3df6cf182329fe096fc28ac92e4.scope - libcontainer container f75a06b6460064e6cf650939d62fffb4b698e3df6cf182329fe096fc28ac92e4. Dec 16 13:06:39.814329 systemd[1]: Started cri-containerd-bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79.scope - libcontainer container bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79. Dec 16 13:06:39.846797 containerd[1547]: time="2025-12-16T13:06:39.846744359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gk85x,Uid:5de33345-e876-467f-b67c-beadd8290182,Namespace:kube-system,Attempt:0,} returns sandbox id \"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\"" Dec 16 13:06:39.848632 containerd[1547]: time="2025-12-16T13:06:39.848611676Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 16 13:06:39.856064 containerd[1547]: time="2025-12-16T13:06:39.856014029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jnhxb,Uid:27e8190b-19ae-488a-8a58-acd57e0766ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"f75a06b6460064e6cf650939d62fffb4b698e3df6cf182329fe096fc28ac92e4\"" Dec 16 13:06:39.861042 containerd[1547]: time="2025-12-16T13:06:39.861008233Z" level=info msg="CreateContainer within sandbox \"f75a06b6460064e6cf650939d62fffb4b698e3df6cf182329fe096fc28ac92e4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:06:39.874919 containerd[1547]: time="2025-12-16T13:06:39.874854273Z" level=info msg="Container 
42ea66569a349764c749c6ecfc14c50412f93c58f992c1596c7e0724ba9ad4a0: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:39.884139 containerd[1547]: time="2025-12-16T13:06:39.884098806Z" level=info msg="CreateContainer within sandbox \"f75a06b6460064e6cf650939d62fffb4b698e3df6cf182329fe096fc28ac92e4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"42ea66569a349764c749c6ecfc14c50412f93c58f992c1596c7e0724ba9ad4a0\"" Dec 16 13:06:39.885143 containerd[1547]: time="2025-12-16T13:06:39.884894086Z" level=info msg="StartContainer for \"42ea66569a349764c749c6ecfc14c50412f93c58f992c1596c7e0724ba9ad4a0\"" Dec 16 13:06:39.887478 containerd[1547]: time="2025-12-16T13:06:39.887450794Z" level=info msg="connecting to shim 42ea66569a349764c749c6ecfc14c50412f93c58f992c1596c7e0724ba9ad4a0" address="unix:///run/containerd/s/9367f720da46a24736c2a2bd8d782dcfb17325ca819462731bd8cdeeafa04dc7" protocol=ttrpc version=3 Dec 16 13:06:39.912999 systemd[1]: Started cri-containerd-42ea66569a349764c749c6ecfc14c50412f93c58f992c1596c7e0724ba9ad4a0.scope - libcontainer container 42ea66569a349764c749c6ecfc14c50412f93c58f992c1596c7e0724ba9ad4a0. Dec 16 13:06:40.012991 containerd[1547]: time="2025-12-16T13:06:40.012737340Z" level=info msg="StartContainer for \"42ea66569a349764c749c6ecfc14c50412f93c58f992c1596c7e0724ba9ad4a0\" returns successfully" Dec 16 13:06:40.016119 systemd[1]: Created slice kubepods-besteffort-pod446aa47b_5e7d_45a8_bb40_167ed9a8504f.slice - libcontainer container kubepods-besteffort-pod446aa47b_5e7d_45a8_bb40_167ed9a8504f.slice. 
Dec 16 13:06:40.091270 kubelet[2694]: I1216 13:06:40.091223 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz7ff\" (UniqueName: \"kubernetes.io/projected/446aa47b-5e7d-45a8-bb40-167ed9a8504f-kube-api-access-vz7ff\") pod \"cilium-operator-6c4d7847fc-7cljj\" (UID: \"446aa47b-5e7d-45a8-bb40-167ed9a8504f\") " pod="kube-system/cilium-operator-6c4d7847fc-7cljj" Dec 16 13:06:40.091270 kubelet[2694]: I1216 13:06:40.091261 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/446aa47b-5e7d-45a8-bb40-167ed9a8504f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7cljj\" (UID: \"446aa47b-5e7d-45a8-bb40-167ed9a8504f\") " pod="kube-system/cilium-operator-6c4d7847fc-7cljj" Dec 16 13:06:40.322667 containerd[1547]: time="2025-12-16T13:06:40.322626553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7cljj,Uid:446aa47b-5e7d-45a8-bb40-167ed9a8504f,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:40.374455 containerd[1547]: time="2025-12-16T13:06:40.374412356Z" level=info msg="connecting to shim 0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb" address="unix:///run/containerd/s/e0403fe54caf39e278ba667ce848f3bd50cce0bd59ccf909261b86211f7405ef" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:40.399010 systemd[1]: Started cri-containerd-0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb.scope - libcontainer container 0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb. 
Dec 16 13:06:40.442630 containerd[1547]: time="2025-12-16T13:06:40.442575835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7cljj,Uid:446aa47b-5e7d-45a8-bb40-167ed9a8504f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb\"" Dec 16 13:06:40.890230 kubelet[2694]: I1216 13:06:40.890177 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jnhxb" podStartSLOduration=1.890163872 podStartE2EDuration="1.890163872s" podCreationTimestamp="2025-12-16 13:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:40.889706365 +0000 UTC m=+8.113673530" watchObservedRunningTime="2025-12-16 13:06:40.890163872 +0000 UTC m=+8.114131037" Dec 16 13:06:41.569478 kubelet[2694]: E1216 13:06:41.569440 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:41.885606 kubelet[2694]: E1216 13:06:41.885217 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:41.886138 kubelet[2694]: E1216 13:06:41.886118 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:42.887121 kubelet[2694]: E1216 13:06:42.887084 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:44.690720 kubelet[2694]: E1216 13:06:44.690686 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:44.890622 kubelet[2694]: E1216 13:06:44.890587 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:47.500782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3151214186.mount: Deactivated successfully. Dec 16 13:06:49.104181 update_engine[1534]: I20251216 13:06:49.104118 1534 update_attempter.cc:509] Updating boot flags... Dec 16 13:06:50.389803 containerd[1547]: time="2025-12-16T13:06:50.389724795Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:50.436290 containerd[1547]: time="2025-12-16T13:06:50.436202166Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Dec 16 13:06:50.491162 containerd[1547]: time="2025-12-16T13:06:50.491110380Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:50.492744 containerd[1547]: time="2025-12-16T13:06:50.492701562Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.643943421s" Dec 16 13:06:50.492744 containerd[1547]: time="2025-12-16T13:06:50.492749832Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 16 13:06:50.497243 containerd[1547]: time="2025-12-16T13:06:50.497204491Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 16 13:06:50.610357 containerd[1547]: time="2025-12-16T13:06:50.610311776Z" level=info msg="CreateContainer within sandbox \"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 13:06:50.828910 containerd[1547]: time="2025-12-16T13:06:50.828484581Z" level=info msg="Container 02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:50.832217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount953891140.mount: Deactivated successfully. 
Dec 16 13:06:50.988345 containerd[1547]: time="2025-12-16T13:06:50.988297684Z" level=info msg="CreateContainer within sandbox \"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc\"" Dec 16 13:06:50.988975 containerd[1547]: time="2025-12-16T13:06:50.988923618Z" level=info msg="StartContainer for \"02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc\"" Dec 16 13:06:50.989806 containerd[1547]: time="2025-12-16T13:06:50.989783439Z" level=info msg="connecting to shim 02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc" address="unix:///run/containerd/s/727f1ef073063c9a452ef7040899e63158a7393c0bc692cce8f53064198c6d94" protocol=ttrpc version=3 Dec 16 13:06:51.014000 systemd[1]: Started cri-containerd-02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc.scope - libcontainer container 02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc. Dec 16 13:06:51.043850 containerd[1547]: time="2025-12-16T13:06:51.043812027Z" level=info msg="StartContainer for \"02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc\" returns successfully" Dec 16 13:06:51.062368 systemd[1]: cri-containerd-02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc.scope: Deactivated successfully. Dec 16 13:06:51.062851 systemd[1]: cri-containerd-02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc.scope: Consumed 26ms CPU time, 6.8M memory peak, 3.2M written to disk. 
Dec 16 13:06:51.064017 containerd[1547]: time="2025-12-16T13:06:51.063972015Z" level=info msg="received container exit event container_id:\"02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc\" id:\"02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc\" pid:3139 exited_at:{seconds:1765890411 nanos:63515590}" Dec 16 13:06:51.088729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc-rootfs.mount: Deactivated successfully. Dec 16 13:06:51.902858 kubelet[2694]: E1216 13:06:51.902822 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:51.908325 containerd[1547]: time="2025-12-16T13:06:51.908292268Z" level=info msg="CreateContainer within sandbox \"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 13:06:51.923093 containerd[1547]: time="2025-12-16T13:06:51.922755587Z" level=info msg="Container 80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:51.931003 containerd[1547]: time="2025-12-16T13:06:51.930954094Z" level=info msg="CreateContainer within sandbox \"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc\"" Dec 16 13:06:51.931558 containerd[1547]: time="2025-12-16T13:06:51.931533751Z" level=info msg="StartContainer for \"80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc\"" Dec 16 13:06:51.932676 containerd[1547]: time="2025-12-16T13:06:51.932635847Z" level=info msg="connecting to shim 80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc" 
address="unix:///run/containerd/s/727f1ef073063c9a452ef7040899e63158a7393c0bc692cce8f53064198c6d94" protocol=ttrpc version=3 Dec 16 13:06:51.953140 systemd[1]: Started cri-containerd-80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc.scope - libcontainer container 80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc. Dec 16 13:06:52.102122 containerd[1547]: time="2025-12-16T13:06:52.102070548Z" level=info msg="StartContainer for \"80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc\" returns successfully" Dec 16 13:06:52.165507 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:06:52.165755 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:06:52.166032 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:06:52.169110 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:06:52.170990 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 13:06:52.171391 systemd[1]: cri-containerd-80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc.scope: Deactivated successfully. Dec 16 13:06:52.171881 containerd[1547]: time="2025-12-16T13:06:52.171810963Z" level=info msg="received container exit event container_id:\"80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc\" id:\"80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc\" pid:3185 exited_at:{seconds:1765890412 nanos:170744293}" Dec 16 13:06:52.224893 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 16 13:06:52.906052 kubelet[2694]: E1216 13:06:52.906018 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:52.913148 containerd[1547]: time="2025-12-16T13:06:52.913101353Z" level=info msg="CreateContainer within sandbox \"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 13:06:52.921031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc-rootfs.mount: Deactivated successfully. Dec 16 13:06:52.926076 containerd[1547]: time="2025-12-16T13:06:52.926023766Z" level=info msg="Container 14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:52.936468 containerd[1547]: time="2025-12-16T13:06:52.936423629Z" level=info msg="CreateContainer within sandbox \"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb\"" Dec 16 13:06:52.936899 containerd[1547]: time="2025-12-16T13:06:52.936854898Z" level=info msg="StartContainer for \"14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb\"" Dec 16 13:06:52.938137 containerd[1547]: time="2025-12-16T13:06:52.938103689Z" level=info msg="connecting to shim 14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb" address="unix:///run/containerd/s/727f1ef073063c9a452ef7040899e63158a7393c0bc692cce8f53064198c6d94" protocol=ttrpc version=3 Dec 16 13:06:52.958028 systemd[1]: Started cri-containerd-14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb.scope - libcontainer container 14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb. 
Dec 16 13:06:53.043780 containerd[1547]: time="2025-12-16T13:06:53.043727449Z" level=info msg="StartContainer for \"14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb\" returns successfully" Dec 16 13:06:53.044167 systemd[1]: cri-containerd-14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb.scope: Deactivated successfully. Dec 16 13:06:53.046758 containerd[1547]: time="2025-12-16T13:06:53.046725278Z" level=info msg="received container exit event container_id:\"14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb\" id:\"14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb\" pid:3241 exited_at:{seconds:1765890413 nanos:46445143}" Dec 16 13:06:53.068357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb-rootfs.mount: Deactivated successfully. Dec 16 13:06:53.910441 kubelet[2694]: E1216 13:06:53.910410 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:53.916091 containerd[1547]: time="2025-12-16T13:06:53.916048981Z" level=info msg="CreateContainer within sandbox \"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 13:06:53.928755 containerd[1547]: time="2025-12-16T13:06:53.928710205Z" level=info msg="Container 77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:53.932240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1400730859.mount: Deactivated successfully. 
Dec 16 13:06:53.935875 containerd[1547]: time="2025-12-16T13:06:53.935815594Z" level=info msg="CreateContainer within sandbox \"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8\"" Dec 16 13:06:53.936376 containerd[1547]: time="2025-12-16T13:06:53.936340638Z" level=info msg="StartContainer for \"77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8\"" Dec 16 13:06:53.937289 containerd[1547]: time="2025-12-16T13:06:53.937265372Z" level=info msg="connecting to shim 77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8" address="unix:///run/containerd/s/727f1ef073063c9a452ef7040899e63158a7393c0bc692cce8f53064198c6d94" protocol=ttrpc version=3 Dec 16 13:06:53.965004 systemd[1]: Started cri-containerd-77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8.scope - libcontainer container 77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8. Dec 16 13:06:53.995642 systemd[1]: cri-containerd-77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8.scope: Deactivated successfully. Dec 16 13:06:53.998242 containerd[1547]: time="2025-12-16T13:06:53.998195036Z" level=info msg="received container exit event container_id:\"77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8\" id:\"77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8\" pid:3279 exited_at:{seconds:1765890413 nanos:996229802}" Dec 16 13:06:54.006911 containerd[1547]: time="2025-12-16T13:06:54.006846313Z" level=info msg="StartContainer for \"77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8\" returns successfully" Dec 16 13:06:54.020351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8-rootfs.mount: Deactivated successfully. 
Dec 16 13:06:54.327193 containerd[1547]: time="2025-12-16T13:06:54.327138672Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:54.327916 containerd[1547]: time="2025-12-16T13:06:54.327860004Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Dec 16 13:06:54.329021 containerd[1547]: time="2025-12-16T13:06:54.328984182Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:54.330120 containerd[1547]: time="2025-12-16T13:06:54.330094183Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.832855538s" Dec 16 13:06:54.330185 containerd[1547]: time="2025-12-16T13:06:54.330122576Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 16 13:06:54.334943 containerd[1547]: time="2025-12-16T13:06:54.334895152Z" level=info msg="CreateContainer within sandbox \"0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 16 13:06:54.341239 containerd[1547]: time="2025-12-16T13:06:54.341201905Z" level=info msg="Container 
6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:54.347759 containerd[1547]: time="2025-12-16T13:06:54.347716477Z" level=info msg="CreateContainer within sandbox \"0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\"" Dec 16 13:06:54.349182 containerd[1547]: time="2025-12-16T13:06:54.348079197Z" level=info msg="StartContainer for \"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\"" Dec 16 13:06:54.349182 containerd[1547]: time="2025-12-16T13:06:54.348945902Z" level=info msg="connecting to shim 6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0" address="unix:///run/containerd/s/e0403fe54caf39e278ba667ce848f3bd50cce0bd59ccf909261b86211f7405ef" protocol=ttrpc version=3 Dec 16 13:06:54.368030 systemd[1]: Started cri-containerd-6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0.scope - libcontainer container 6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0. 
Dec 16 13:06:54.397192 containerd[1547]: time="2025-12-16T13:06:54.397154254Z" level=info msg="StartContainer for \"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\" returns successfully" Dec 16 13:06:54.920563 kubelet[2694]: E1216 13:06:54.920517 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:54.931102 containerd[1547]: time="2025-12-16T13:06:54.931055696Z" level=info msg="CreateContainer within sandbox \"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 13:06:54.932670 kubelet[2694]: E1216 13:06:54.932632 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:54.958427 kubelet[2694]: I1216 13:06:54.958360 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7cljj" podStartSLOduration=2.071259465 podStartE2EDuration="15.958345113s" podCreationTimestamp="2025-12-16 13:06:39 +0000 UTC" firstStartedPulling="2025-12-16 13:06:40.443745546 +0000 UTC m=+7.667712701" lastFinishedPulling="2025-12-16 13:06:54.330831184 +0000 UTC m=+21.554798349" observedRunningTime="2025-12-16 13:06:54.958142343 +0000 UTC m=+22.182109528" watchObservedRunningTime="2025-12-16 13:06:54.958345113 +0000 UTC m=+22.182312278" Dec 16 13:06:54.995116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2015123139.mount: Deactivated successfully. 
Dec 16 13:06:54.995573 containerd[1547]: time="2025-12-16T13:06:54.995335054Z" level=info msg="Container 5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:55.003518 containerd[1547]: time="2025-12-16T13:06:55.003477498Z" level=info msg="CreateContainer within sandbox \"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\"" Dec 16 13:06:55.004198 containerd[1547]: time="2025-12-16T13:06:55.004175477Z" level=info msg="StartContainer for \"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\"" Dec 16 13:06:55.004986 containerd[1547]: time="2025-12-16T13:06:55.004936013Z" level=info msg="connecting to shim 5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36" address="unix:///run/containerd/s/727f1ef073063c9a452ef7040899e63158a7393c0bc692cce8f53064198c6d94" protocol=ttrpc version=3 Dec 16 13:06:55.029000 systemd[1]: Started cri-containerd-5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36.scope - libcontainer container 5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36. Dec 16 13:06:55.091813 containerd[1547]: time="2025-12-16T13:06:55.091759779Z" level=info msg="StartContainer for \"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\" returns successfully" Dec 16 13:06:55.294157 kubelet[2694]: I1216 13:06:55.294053 2694 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 13:06:55.449368 systemd[1]: Created slice kubepods-burstable-pod90abef5b_6fbc_49e7_984d_b4a74fb800cb.slice - libcontainer container kubepods-burstable-pod90abef5b_6fbc_49e7_984d_b4a74fb800cb.slice. 
Dec 16 13:06:55.456943 systemd[1]: Created slice kubepods-burstable-podfb00ee8c_df5a_49e0_8152_9d6ce185520a.slice - libcontainer container kubepods-burstable-podfb00ee8c_df5a_49e0_8152_9d6ce185520a.slice. Dec 16 13:06:55.491397 kubelet[2694]: I1216 13:06:55.491348 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsc72\" (UniqueName: \"kubernetes.io/projected/90abef5b-6fbc-49e7-984d-b4a74fb800cb-kube-api-access-bsc72\") pod \"coredns-674b8bbfcf-pw9rh\" (UID: \"90abef5b-6fbc-49e7-984d-b4a74fb800cb\") " pod="kube-system/coredns-674b8bbfcf-pw9rh" Dec 16 13:06:55.491397 kubelet[2694]: I1216 13:06:55.491382 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn9jp\" (UniqueName: \"kubernetes.io/projected/fb00ee8c-df5a-49e0-8152-9d6ce185520a-kube-api-access-vn9jp\") pod \"coredns-674b8bbfcf-5sdng\" (UID: \"fb00ee8c-df5a-49e0-8152-9d6ce185520a\") " pod="kube-system/coredns-674b8bbfcf-5sdng" Dec 16 13:06:55.491397 kubelet[2694]: I1216 13:06:55.491404 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb00ee8c-df5a-49e0-8152-9d6ce185520a-config-volume\") pod \"coredns-674b8bbfcf-5sdng\" (UID: \"fb00ee8c-df5a-49e0-8152-9d6ce185520a\") " pod="kube-system/coredns-674b8bbfcf-5sdng" Dec 16 13:06:55.491582 kubelet[2694]: I1216 13:06:55.491420 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90abef5b-6fbc-49e7-984d-b4a74fb800cb-config-volume\") pod \"coredns-674b8bbfcf-pw9rh\" (UID: \"90abef5b-6fbc-49e7-984d-b4a74fb800cb\") " pod="kube-system/coredns-674b8bbfcf-pw9rh" Dec 16 13:06:55.752096 kubelet[2694]: E1216 13:06:55.752059 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:55.752950 containerd[1547]: time="2025-12-16T13:06:55.752911721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pw9rh,Uid:90abef5b-6fbc-49e7-984d-b4a74fb800cb,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:55.759235 kubelet[2694]: E1216 13:06:55.759210 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:55.759838 containerd[1547]: time="2025-12-16T13:06:55.759790557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5sdng,Uid:fb00ee8c-df5a-49e0-8152-9d6ce185520a,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:55.961169 kubelet[2694]: E1216 13:06:55.961125 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:55.961530 kubelet[2694]: E1216 13:06:55.961240 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:55.974887 kubelet[2694]: I1216 13:06:55.974731 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gk85x" podStartSLOduration=6.327306846 podStartE2EDuration="16.974717695s" podCreationTimestamp="2025-12-16 13:06:39 +0000 UTC" firstStartedPulling="2025-12-16 13:06:39.848398017 +0000 UTC m=+7.072365182" lastFinishedPulling="2025-12-16 13:06:50.495808866 +0000 UTC m=+17.719776031" observedRunningTime="2025-12-16 13:06:55.974592981 +0000 UTC m=+23.198560156" watchObservedRunningTime="2025-12-16 13:06:55.974717695 +0000 UTC m=+23.198684860" Dec 16 13:06:56.962672 kubelet[2694]: E1216 13:06:56.962632 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:57.963966 kubelet[2694]: E1216 13:06:57.963920 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:58.090322 systemd-networkd[1455]: cilium_host: Link UP Dec 16 13:06:58.090528 systemd-networkd[1455]: cilium_net: Link UP Dec 16 13:06:58.090759 systemd-networkd[1455]: cilium_net: Gained carrier Dec 16 13:06:58.090999 systemd-networkd[1455]: cilium_host: Gained carrier Dec 16 13:06:58.193353 systemd-networkd[1455]: cilium_vxlan: Link UP Dec 16 13:06:58.193367 systemd-networkd[1455]: cilium_vxlan: Gained carrier Dec 16 13:06:58.318068 systemd-networkd[1455]: cilium_host: Gained IPv6LL Dec 16 13:06:58.397899 kernel: NET: Registered PF_ALG protocol family Dec 16 13:06:58.399032 systemd-networkd[1455]: cilium_net: Gained IPv6LL Dec 16 13:06:59.023616 systemd-networkd[1455]: lxc_health: Link UP Dec 16 13:06:59.023969 systemd-networkd[1455]: lxc_health: Gained carrier Dec 16 13:06:59.304007 kernel: eth0: renamed from tmpb01e7 Dec 16 13:06:59.305535 systemd-networkd[1455]: lxc28b22ffc03ab: Link UP Dec 16 13:06:59.305888 kernel: eth0: renamed from tmp2bfd9 Dec 16 13:06:59.307729 systemd-networkd[1455]: lxc00872fc59212: Link UP Dec 16 13:06:59.308350 systemd-networkd[1455]: lxc28b22ffc03ab: Gained carrier Dec 16 13:06:59.308530 systemd-networkd[1455]: lxc00872fc59212: Gained carrier Dec 16 13:06:59.725596 kubelet[2694]: E1216 13:06:59.725285 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:06:59.878132 systemd-networkd[1455]: cilium_vxlan: Gained IPv6LL Dec 16 13:06:59.971158 kubelet[2694]: E1216 13:06:59.971128 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:07:00.198052 systemd-networkd[1455]: lxc_health: Gained IPv6LL Dec 16 13:07:00.518025 systemd-networkd[1455]: lxc00872fc59212: Gained IPv6LL Dec 16 13:07:00.524580 systemd[1]: Started sshd@7-10.0.0.87:22-10.0.0.1:46510.service - OpenSSH per-connection server daemon (10.0.0.1:46510). Dec 16 13:07:00.586392 sshd[3858]: Accepted publickey for core from 10.0.0.1 port 46510 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:00.588029 sshd-session[3858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:00.592231 systemd-logind[1530]: New session 8 of user core. Dec 16 13:07:00.606972 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 13:07:00.747344 sshd[3862]: Connection closed by 10.0.0.1 port 46510 Dec 16 13:07:00.749235 sshd-session[3858]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:00.752982 systemd[1]: sshd@7-10.0.0.87:22-10.0.0.1:46510.service: Deactivated successfully. Dec 16 13:07:00.755023 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:07:00.757022 systemd-logind[1530]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:07:00.758116 systemd-logind[1530]: Removed session 8. 
Dec 16 13:07:00.966017 systemd-networkd[1455]: lxc28b22ffc03ab: Gained IPv6LL Dec 16 13:07:00.972386 kubelet[2694]: E1216 13:07:00.972357 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:07:02.587403 containerd[1547]: time="2025-12-16T13:07:02.587359590Z" level=info msg="connecting to shim b01e75858e934b680c3a82b539e82b036f950a3f9f49db0984a87d8cf2a3801e" address="unix:///run/containerd/s/45159c79eab27e9e9987b2d445538df03debb5db59fa406e55ec0daf16bab129" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:02.588654 containerd[1547]: time="2025-12-16T13:07:02.588617609Z" level=info msg="connecting to shim 2bfd9f1f7fa22152bf6ae2b8ba17288dbd234364a317d443abcc3c833fc70e71" address="unix:///run/containerd/s/45b9a7936e708c13ba0a1b00a21a07563c502982a6f59a1f78c65a602a2e5245" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:02.610990 systemd[1]: Started cri-containerd-b01e75858e934b680c3a82b539e82b036f950a3f9f49db0984a87d8cf2a3801e.scope - libcontainer container b01e75858e934b680c3a82b539e82b036f950a3f9f49db0984a87d8cf2a3801e. Dec 16 13:07:02.614034 systemd[1]: Started cri-containerd-2bfd9f1f7fa22152bf6ae2b8ba17288dbd234364a317d443abcc3c833fc70e71.scope - libcontainer container 2bfd9f1f7fa22152bf6ae2b8ba17288dbd234364a317d443abcc3c833fc70e71. 
Dec 16 13:07:02.626002 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 13:07:02.627817 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 13:07:02.658276 containerd[1547]: time="2025-12-16T13:07:02.658217653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5sdng,Uid:fb00ee8c-df5a-49e0-8152-9d6ce185520a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b01e75858e934b680c3a82b539e82b036f950a3f9f49db0984a87d8cf2a3801e\"" Dec 16 13:07:02.661985 kubelet[2694]: E1216 13:07:02.661967 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:07:02.669010 containerd[1547]: time="2025-12-16T13:07:02.668514360Z" level=info msg="CreateContainer within sandbox \"b01e75858e934b680c3a82b539e82b036f950a3f9f49db0984a87d8cf2a3801e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:07:02.669169 containerd[1547]: time="2025-12-16T13:07:02.668859125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pw9rh,Uid:90abef5b-6fbc-49e7-984d-b4a74fb800cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bfd9f1f7fa22152bf6ae2b8ba17288dbd234364a317d443abcc3c833fc70e71\"" Dec 16 13:07:02.669889 kubelet[2694]: E1216 13:07:02.669841 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:07:02.675491 containerd[1547]: time="2025-12-16T13:07:02.675321904Z" level=info msg="CreateContainer within sandbox \"2bfd9f1f7fa22152bf6ae2b8ba17288dbd234364a317d443abcc3c833fc70e71\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:07:02.689289 containerd[1547]: 
time="2025-12-16T13:07:02.689240943Z" level=info msg="Container 4926c0e5182215b7157c3b3ed4620c97b89ad3ab04958bd498ba2d597f363162: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:02.695853 containerd[1547]: time="2025-12-16T13:07:02.695738586Z" level=info msg="Container cb4c4c1a9034e1533d5a92882e3b03b596fd744ff7651681267d955b6b59067d: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:02.701824 containerd[1547]: time="2025-12-16T13:07:02.701775817Z" level=info msg="CreateContainer within sandbox \"2bfd9f1f7fa22152bf6ae2b8ba17288dbd234364a317d443abcc3c833fc70e71\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4926c0e5182215b7157c3b3ed4620c97b89ad3ab04958bd498ba2d597f363162\"" Dec 16 13:07:02.702256 containerd[1547]: time="2025-12-16T13:07:02.702222263Z" level=info msg="StartContainer for \"4926c0e5182215b7157c3b3ed4620c97b89ad3ab04958bd498ba2d597f363162\"" Dec 16 13:07:02.703013 containerd[1547]: time="2025-12-16T13:07:02.702989642Z" level=info msg="connecting to shim 4926c0e5182215b7157c3b3ed4620c97b89ad3ab04958bd498ba2d597f363162" address="unix:///run/containerd/s/45b9a7936e708c13ba0a1b00a21a07563c502982a6f59a1f78c65a602a2e5245" protocol=ttrpc version=3 Dec 16 13:07:02.704829 containerd[1547]: time="2025-12-16T13:07:02.704793755Z" level=info msg="CreateContainer within sandbox \"b01e75858e934b680c3a82b539e82b036f950a3f9f49db0984a87d8cf2a3801e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb4c4c1a9034e1533d5a92882e3b03b596fd744ff7651681267d955b6b59067d\"" Dec 16 13:07:02.705997 containerd[1547]: time="2025-12-16T13:07:02.705941878Z" level=info msg="StartContainer for \"cb4c4c1a9034e1533d5a92882e3b03b596fd744ff7651681267d955b6b59067d\"" Dec 16 13:07:02.706617 containerd[1547]: time="2025-12-16T13:07:02.706583040Z" level=info msg="connecting to shim cb4c4c1a9034e1533d5a92882e3b03b596fd744ff7651681267d955b6b59067d" 
address="unix:///run/containerd/s/45159c79eab27e9e9987b2d445538df03debb5db59fa406e55ec0daf16bab129" protocol=ttrpc version=3 Dec 16 13:07:02.725023 systemd[1]: Started cri-containerd-4926c0e5182215b7157c3b3ed4620c97b89ad3ab04958bd498ba2d597f363162.scope - libcontainer container 4926c0e5182215b7157c3b3ed4620c97b89ad3ab04958bd498ba2d597f363162. Dec 16 13:07:02.728119 systemd[1]: Started cri-containerd-cb4c4c1a9034e1533d5a92882e3b03b596fd744ff7651681267d955b6b59067d.scope - libcontainer container cb4c4c1a9034e1533d5a92882e3b03b596fd744ff7651681267d955b6b59067d. Dec 16 13:07:02.781617 containerd[1547]: time="2025-12-16T13:07:02.781564544Z" level=info msg="StartContainer for \"cb4c4c1a9034e1533d5a92882e3b03b596fd744ff7651681267d955b6b59067d\" returns successfully" Dec 16 13:07:02.781822 containerd[1547]: time="2025-12-16T13:07:02.781689989Z" level=info msg="StartContainer for \"4926c0e5182215b7157c3b3ed4620c97b89ad3ab04958bd498ba2d597f363162\" returns successfully" Dec 16 13:07:02.980627 kubelet[2694]: E1216 13:07:02.980098 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:07:02.982019 kubelet[2694]: E1216 13:07:02.981932 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:07:03.002660 kubelet[2694]: I1216 13:07:03.001994 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5sdng" podStartSLOduration=24.00197774 podStartE2EDuration="24.00197774s" podCreationTimestamp="2025-12-16 13:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:02.992517782 +0000 UTC m=+30.216484957" watchObservedRunningTime="2025-12-16 13:07:03.00197774 +0000 UTC 
m=+30.225944905" Dec 16 13:07:03.012173 kubelet[2694]: I1216 13:07:03.012108 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pw9rh" podStartSLOduration=24.012091985 podStartE2EDuration="24.012091985s" podCreationTimestamp="2025-12-16 13:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:03.003159496 +0000 UTC m=+30.227126671" watchObservedRunningTime="2025-12-16 13:07:03.012091985 +0000 UTC m=+30.236059150" Dec 16 13:07:03.581166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3674556572.mount: Deactivated successfully. Dec 16 13:07:03.983816 kubelet[2694]: E1216 13:07:03.983680 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:07:03.984354 kubelet[2694]: E1216 13:07:03.983919 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:07:04.985838 kubelet[2694]: E1216 13:07:04.985750 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:07:04.986285 kubelet[2694]: E1216 13:07:04.986003 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:07:05.769993 systemd[1]: Started sshd@8-10.0.0.87:22-10.0.0.1:54650.service - OpenSSH per-connection server daemon (10.0.0.1:54650). 
Dec 16 13:07:05.816671 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 54650 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:05.818142 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:05.822255 systemd-logind[1530]: New session 9 of user core. Dec 16 13:07:05.833053 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:07:05.965365 sshd[4059]: Connection closed by 10.0.0.1 port 54650 Dec 16 13:07:05.965765 sshd-session[4056]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:05.970239 systemd[1]: sshd@8-10.0.0.87:22-10.0.0.1:54650.service: Deactivated successfully. Dec 16 13:07:05.972244 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:07:05.973108 systemd-logind[1530]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:07:05.974178 systemd-logind[1530]: Removed session 9. Dec 16 13:07:10.977160 systemd[1]: Started sshd@9-10.0.0.87:22-10.0.0.1:54654.service - OpenSSH per-connection server daemon (10.0.0.1:54654). Dec 16 13:07:11.033827 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 54654 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:11.035745 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:11.040708 systemd-logind[1530]: New session 10 of user core. Dec 16 13:07:11.056156 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 13:07:11.161104 sshd[4079]: Connection closed by 10.0.0.1 port 54654 Dec 16 13:07:11.161421 sshd-session[4076]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:11.165631 systemd[1]: sshd@9-10.0.0.87:22-10.0.0.1:54654.service: Deactivated successfully. Dec 16 13:07:11.167438 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:07:11.168337 systemd-logind[1530]: Session 10 logged out. Waiting for processes to exit. 
Dec 16 13:07:11.169589 systemd-logind[1530]: Removed session 10. Dec 16 13:07:16.181323 systemd[1]: Started sshd@10-10.0.0.87:22-10.0.0.1:40338.service - OpenSSH per-connection server daemon (10.0.0.1:40338). Dec 16 13:07:16.235685 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 40338 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:16.236948 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:16.241564 systemd-logind[1530]: New session 11 of user core. Dec 16 13:07:16.249128 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 13:07:16.357609 sshd[4096]: Connection closed by 10.0.0.1 port 40338 Dec 16 13:07:16.357984 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:16.367003 systemd[1]: sshd@10-10.0.0.87:22-10.0.0.1:40338.service: Deactivated successfully. Dec 16 13:07:16.369036 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:07:16.369947 systemd-logind[1530]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:07:16.372638 systemd[1]: Started sshd@11-10.0.0.87:22-10.0.0.1:40348.service - OpenSSH per-connection server daemon (10.0.0.1:40348). Dec 16 13:07:16.373679 systemd-logind[1530]: Removed session 11. Dec 16 13:07:16.426313 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 40348 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:16.427593 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:16.431844 systemd-logind[1530]: New session 12 of user core. Dec 16 13:07:16.445997 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 13:07:16.581592 sshd[4113]: Connection closed by 10.0.0.1 port 40348 Dec 16 13:07:16.581960 sshd-session[4110]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:16.590687 systemd[1]: sshd@11-10.0.0.87:22-10.0.0.1:40348.service: Deactivated successfully. Dec 16 13:07:16.593066 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:07:16.595944 systemd-logind[1530]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:07:16.597816 systemd[1]: Started sshd@12-10.0.0.87:22-10.0.0.1:40356.service - OpenSSH per-connection server daemon (10.0.0.1:40356). Dec 16 13:07:16.599615 systemd-logind[1530]: Removed session 12. Dec 16 13:07:16.652721 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 40356 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:16.654305 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:16.658535 systemd-logind[1530]: New session 13 of user core. Dec 16 13:07:16.672059 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 13:07:16.782191 sshd[4128]: Connection closed by 10.0.0.1 port 40356 Dec 16 13:07:16.782456 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:16.786592 systemd[1]: sshd@12-10.0.0.87:22-10.0.0.1:40356.service: Deactivated successfully. Dec 16 13:07:16.788633 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:07:16.789340 systemd-logind[1530]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:07:16.790338 systemd-logind[1530]: Removed session 13. Dec 16 13:07:21.800577 systemd[1]: Started sshd@13-10.0.0.87:22-10.0.0.1:40368.service - OpenSSH per-connection server daemon (10.0.0.1:40368). 
Dec 16 13:07:21.857529 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 40368 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:21.859741 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:21.864663 systemd-logind[1530]: New session 14 of user core. Dec 16 13:07:21.874070 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 13:07:21.977802 sshd[4146]: Connection closed by 10.0.0.1 port 40368 Dec 16 13:07:21.978166 sshd-session[4143]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:21.982556 systemd[1]: sshd@13-10.0.0.87:22-10.0.0.1:40368.service: Deactivated successfully. Dec 16 13:07:21.985911 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:07:21.988107 systemd-logind[1530]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:07:21.989970 systemd-logind[1530]: Removed session 14. Dec 16 13:07:26.989627 systemd[1]: Started sshd@14-10.0.0.87:22-10.0.0.1:51150.service - OpenSSH per-connection server daemon (10.0.0.1:51150). Dec 16 13:07:27.030499 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 51150 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:27.031834 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:27.035665 systemd-logind[1530]: New session 15 of user core. Dec 16 13:07:27.042976 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 13:07:27.144915 sshd[4163]: Connection closed by 10.0.0.1 port 51150 Dec 16 13:07:27.145368 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:27.153602 systemd[1]: sshd@14-10.0.0.87:22-10.0.0.1:51150.service: Deactivated successfully. Dec 16 13:07:27.155456 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 13:07:27.156298 systemd-logind[1530]: Session 15 logged out. Waiting for processes to exit. 
Dec 16 13:07:27.159075 systemd[1]: Started sshd@15-10.0.0.87:22-10.0.0.1:51162.service - OpenSSH per-connection server daemon (10.0.0.1:51162). Dec 16 13:07:27.159955 systemd-logind[1530]: Removed session 15. Dec 16 13:07:27.208976 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 51162 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:27.210179 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:27.214282 systemd-logind[1530]: New session 16 of user core. Dec 16 13:07:27.227995 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 13:07:27.398989 sshd[4179]: Connection closed by 10.0.0.1 port 51162 Dec 16 13:07:27.399334 sshd-session[4176]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:27.410730 systemd[1]: sshd@15-10.0.0.87:22-10.0.0.1:51162.service: Deactivated successfully. Dec 16 13:07:27.412656 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:07:27.413414 systemd-logind[1530]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:07:27.416339 systemd[1]: Started sshd@16-10.0.0.87:22-10.0.0.1:51168.service - OpenSSH per-connection server daemon (10.0.0.1:51168). Dec 16 13:07:27.416900 systemd-logind[1530]: Removed session 16. Dec 16 13:07:27.468441 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 51168 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:27.469698 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:27.473719 systemd-logind[1530]: New session 17 of user core. Dec 16 13:07:27.492982 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 13:07:28.056492 sshd[4194]: Connection closed by 10.0.0.1 port 51168 Dec 16 13:07:28.058540 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:28.067882 systemd[1]: sshd@16-10.0.0.87:22-10.0.0.1:51168.service: Deactivated successfully. Dec 16 13:07:28.070947 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:07:28.073310 systemd-logind[1530]: Session 17 logged out. Waiting for processes to exit. Dec 16 13:07:28.076755 systemd[1]: Started sshd@17-10.0.0.87:22-10.0.0.1:51170.service - OpenSSH per-connection server daemon (10.0.0.1:51170). Dec 16 13:07:28.077908 systemd-logind[1530]: Removed session 17. Dec 16 13:07:28.123460 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 51170 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:28.124648 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:28.128539 systemd-logind[1530]: New session 18 of user core. Dec 16 13:07:28.137016 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 13:07:28.346592 sshd[4215]: Connection closed by 10.0.0.1 port 51170 Dec 16 13:07:28.347066 sshd-session[4212]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:28.357993 systemd[1]: sshd@17-10.0.0.87:22-10.0.0.1:51170.service: Deactivated successfully. Dec 16 13:07:28.360405 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:07:28.361314 systemd-logind[1530]: Session 18 logged out. Waiting for processes to exit. Dec 16 13:07:28.364639 systemd[1]: Started sshd@18-10.0.0.87:22-10.0.0.1:51184.service - OpenSSH per-connection server daemon (10.0.0.1:51184). Dec 16 13:07:28.365354 systemd-logind[1530]: Removed session 18. 
Dec 16 13:07:28.414465 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 51184 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:28.416286 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:28.420248 systemd-logind[1530]: New session 19 of user core. Dec 16 13:07:28.431062 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 13:07:28.536550 sshd[4229]: Connection closed by 10.0.0.1 port 51184 Dec 16 13:07:28.536934 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:28.540994 systemd[1]: sshd@18-10.0.0.87:22-10.0.0.1:51184.service: Deactivated successfully. Dec 16 13:07:28.543033 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:07:28.543924 systemd-logind[1530]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:07:28.545508 systemd-logind[1530]: Removed session 19. Dec 16 13:07:33.552902 systemd[1]: Started sshd@19-10.0.0.87:22-10.0.0.1:57766.service - OpenSSH per-connection server daemon (10.0.0.1:57766). Dec 16 13:07:33.611114 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 57766 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:33.612898 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:33.617248 systemd-logind[1530]: New session 20 of user core. Dec 16 13:07:33.621990 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 13:07:33.724129 sshd[4249]: Connection closed by 10.0.0.1 port 57766 Dec 16 13:07:33.724408 sshd-session[4246]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:33.728844 systemd[1]: sshd@19-10.0.0.87:22-10.0.0.1:57766.service: Deactivated successfully. Dec 16 13:07:33.730803 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 13:07:33.731773 systemd-logind[1530]: Session 20 logged out. Waiting for processes to exit. 
Dec 16 13:07:33.733561 systemd-logind[1530]: Removed session 20. Dec 16 13:07:38.736791 systemd[1]: Started sshd@20-10.0.0.87:22-10.0.0.1:57780.service - OpenSSH per-connection server daemon (10.0.0.1:57780). Dec 16 13:07:38.790109 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 57780 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:38.791321 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:38.795328 systemd-logind[1530]: New session 21 of user core. Dec 16 13:07:38.806013 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 13:07:38.915979 sshd[4265]: Connection closed by 10.0.0.1 port 57780 Dec 16 13:07:38.916327 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:38.921111 systemd[1]: sshd@20-10.0.0.87:22-10.0.0.1:57780.service: Deactivated successfully. Dec 16 13:07:38.923274 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 13:07:38.924246 systemd-logind[1530]: Session 21 logged out. Waiting for processes to exit. Dec 16 13:07:38.925660 systemd-logind[1530]: Removed session 21. Dec 16 13:07:42.857897 kubelet[2694]: E1216 13:07:42.857799 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:07:43.928397 systemd[1]: Started sshd@21-10.0.0.87:22-10.0.0.1:58038.service - OpenSSH per-connection server daemon (10.0.0.1:58038). Dec 16 13:07:43.986537 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 58038 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:43.987703 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:43.991422 systemd-logind[1530]: New session 22 of user core. Dec 16 13:07:44.001988 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 13:07:44.104406 sshd[4283]: Connection closed by 10.0.0.1 port 58038 Dec 16 13:07:44.104765 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:44.113212 systemd[1]: sshd@21-10.0.0.87:22-10.0.0.1:58038.service: Deactivated successfully. Dec 16 13:07:44.114823 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 13:07:44.115591 systemd-logind[1530]: Session 22 logged out. Waiting for processes to exit. Dec 16 13:07:44.117924 systemd[1]: Started sshd@22-10.0.0.87:22-10.0.0.1:58040.service - OpenSSH per-connection server daemon (10.0.0.1:58040). Dec 16 13:07:44.118625 systemd-logind[1530]: Removed session 22. Dec 16 13:07:44.173886 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 58040 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:44.175095 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:44.179052 systemd-logind[1530]: New session 23 of user core. Dec 16 13:07:44.186986 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 13:07:45.518335 containerd[1547]: time="2025-12-16T13:07:45.518280489Z" level=info msg="StopContainer for \"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\" with timeout 30 (s)" Dec 16 13:07:45.522889 containerd[1547]: time="2025-12-16T13:07:45.522012030Z" level=info msg="Stop container \"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\" with signal terminated" Dec 16 13:07:45.556588 systemd[1]: cri-containerd-6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0.scope: Deactivated successfully. 
Dec 16 13:07:45.558149 containerd[1547]: time="2025-12-16T13:07:45.558106851Z" level=info msg="received container exit event container_id:\"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\" id:\"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\" pid:3325 exited_at:{seconds:1765890465 nanos:557780863}" Dec 16 13:07:45.577015 containerd[1547]: time="2025-12-16T13:07:45.576929714Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:07:45.578229 containerd[1547]: time="2025-12-16T13:07:45.578204784Z" level=info msg="StopContainer for \"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\" with timeout 2 (s)" Dec 16 13:07:45.578450 containerd[1547]: time="2025-12-16T13:07:45.578430500Z" level=info msg="Stop container \"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\" with signal terminated" Dec 16 13:07:45.584270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0-rootfs.mount: Deactivated successfully. 
Dec 16 13:07:45.588633 systemd-networkd[1455]: lxc_health: Link DOWN Dec 16 13:07:45.588640 systemd-networkd[1455]: lxc_health: Lost carrier Dec 16 13:07:45.601240 containerd[1547]: time="2025-12-16T13:07:45.601202532Z" level=info msg="StopContainer for \"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\" returns successfully" Dec 16 13:07:45.603588 containerd[1547]: time="2025-12-16T13:07:45.603542298Z" level=info msg="StopPodSandbox for \"0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb\"" Dec 16 13:07:45.608932 containerd[1547]: time="2025-12-16T13:07:45.608899095Z" level=info msg="Container to stop \"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:07:45.611270 systemd[1]: cri-containerd-5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36.scope: Deactivated successfully. Dec 16 13:07:45.611631 systemd[1]: cri-containerd-5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36.scope: Consumed 6.251s CPU time, 124.8M memory peak, 204K read from disk, 13.3M written to disk. Dec 16 13:07:45.615784 systemd[1]: cri-containerd-0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb.scope: Deactivated successfully. 
Dec 16 13:07:45.617020 containerd[1547]: time="2025-12-16T13:07:45.616982363Z" level=info msg="received sandbox exit event container_id:\"0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb\" id:\"0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb\" exit_status:137 exited_at:{seconds:1765890465 nanos:616718763}" monitor_name=podsandbox Dec 16 13:07:45.617694 containerd[1547]: time="2025-12-16T13:07:45.617667996Z" level=info msg="received container exit event container_id:\"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\" id:\"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\" pid:3358 exited_at:{seconds:1765890465 nanos:617543535}" Dec 16 13:07:45.640236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb-rootfs.mount: Deactivated successfully. Dec 16 13:07:45.642526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36-rootfs.mount: Deactivated successfully. 
Dec 16 13:07:45.645069 containerd[1547]: time="2025-12-16T13:07:45.645016409Z" level=info msg="shim disconnected" id=0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb namespace=k8s.io Dec 16 13:07:45.645069 containerd[1547]: time="2025-12-16T13:07:45.645050375Z" level=warning msg="cleaning up after shim disconnected" id=0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb namespace=k8s.io Dec 16 13:07:45.645177 containerd[1547]: time="2025-12-16T13:07:45.645061496Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 13:07:45.650228 containerd[1547]: time="2025-12-16T13:07:45.650190664Z" level=info msg="StopContainer for \"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\" returns successfully" Dec 16 13:07:45.650765 containerd[1547]: time="2025-12-16T13:07:45.650730676Z" level=info msg="StopPodSandbox for \"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\"" Dec 16 13:07:45.650815 containerd[1547]: time="2025-12-16T13:07:45.650793808Z" level=info msg="Container to stop \"02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:07:45.650849 containerd[1547]: time="2025-12-16T13:07:45.650813857Z" level=info msg="Container to stop \"80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:07:45.650849 containerd[1547]: time="2025-12-16T13:07:45.650825189Z" level=info msg="Container to stop \"14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:07:45.650849 containerd[1547]: time="2025-12-16T13:07:45.650836801Z" level=info msg="Container to stop \"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:07:45.650937 
containerd[1547]: time="2025-12-16T13:07:45.650848373Z" level=info msg="Container to stop \"77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:07:45.658529 systemd[1]: cri-containerd-bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79.scope: Deactivated successfully. Dec 16 13:07:45.667924 containerd[1547]: time="2025-12-16T13:07:45.667299027Z" level=info msg="received sandbox exit event container_id:\"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\" id:\"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\" exit_status:137 exited_at:{seconds:1765890465 nanos:666190348}" monitor_name=podsandbox Dec 16 13:07:45.677527 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb-shm.mount: Deactivated successfully. Dec 16 13:07:45.679538 containerd[1547]: time="2025-12-16T13:07:45.679486556Z" level=info msg="TearDown network for sandbox \"0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb\" successfully" Dec 16 13:07:45.679538 containerd[1547]: time="2025-12-16T13:07:45.679518968Z" level=info msg="StopPodSandbox for \"0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb\" returns successfully" Dec 16 13:07:45.686943 containerd[1547]: time="2025-12-16T13:07:45.686859553Z" level=info msg="received sandbox container exit event sandbox_id:\"0db9ff0506b06a28f6fd5cf2f902d48ebab8868587778cf5d5bedc8e26fd5fbb\" exit_status:137 exited_at:{seconds:1765890465 nanos:616718763}" monitor_name=criService Dec 16 13:07:45.693678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79-rootfs.mount: Deactivated successfully. 
Dec 16 13:07:45.698299 containerd[1547]: time="2025-12-16T13:07:45.698237669Z" level=info msg="shim disconnected" id=bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79 namespace=k8s.io Dec 16 13:07:45.698415 containerd[1547]: time="2025-12-16T13:07:45.698338403Z" level=warning msg="cleaning up after shim disconnected" id=bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79 namespace=k8s.io Dec 16 13:07:45.698415 containerd[1547]: time="2025-12-16T13:07:45.698348343Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 13:07:45.710776 containerd[1547]: time="2025-12-16T13:07:45.710730788Z" level=info msg="received sandbox container exit event sandbox_id:\"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\" exit_status:137 exited_at:{seconds:1765890465 nanos:666190348}" monitor_name=criService Dec 16 13:07:45.711252 containerd[1547]: time="2025-12-16T13:07:45.711208258Z" level=info msg="TearDown network for sandbox \"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\" successfully" Dec 16 13:07:45.711252 containerd[1547]: time="2025-12-16T13:07:45.711242855Z" level=info msg="StopPodSandbox for \"bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79\" returns successfully" Dec 16 13:07:45.796136 kubelet[2694]: I1216 13:07:45.795857 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-xtables-lock\") pod \"5de33345-e876-467f-b67c-beadd8290182\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " Dec 16 13:07:45.796136 kubelet[2694]: I1216 13:07:45.795957 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-cilium-cgroup\") pod \"5de33345-e876-467f-b67c-beadd8290182\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " Dec 16 13:07:45.796136 
kubelet[2694]: I1216 13:07:45.795973 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-host-proc-sys-kernel\") pod \"5de33345-e876-467f-b67c-beadd8290182\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " Dec 16 13:07:45.796136 kubelet[2694]: I1216 13:07:45.795997 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5de33345-e876-467f-b67c-beadd8290182-cilium-config-path\") pod \"5de33345-e876-467f-b67c-beadd8290182\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " Dec 16 13:07:45.796136 kubelet[2694]: I1216 13:07:45.795996 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5de33345-e876-467f-b67c-beadd8290182" (UID: "5de33345-e876-467f-b67c-beadd8290182"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:07:45.796630 kubelet[2694]: I1216 13:07:45.795996 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5de33345-e876-467f-b67c-beadd8290182" (UID: "5de33345-e876-467f-b67c-beadd8290182"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:07:45.796630 kubelet[2694]: I1216 13:07:45.796021 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz7ff\" (UniqueName: \"kubernetes.io/projected/446aa47b-5e7d-45a8-bb40-167ed9a8504f-kube-api-access-vz7ff\") pod \"446aa47b-5e7d-45a8-bb40-167ed9a8504f\" (UID: \"446aa47b-5e7d-45a8-bb40-167ed9a8504f\") " Dec 16 13:07:45.796630 kubelet[2694]: I1216 13:07:45.796106 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-hostproc\") pod \"5de33345-e876-467f-b67c-beadd8290182\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " Dec 16 13:07:45.796630 kubelet[2694]: I1216 13:07:45.796127 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-cni-path\") pod \"5de33345-e876-467f-b67c-beadd8290182\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " Dec 16 13:07:45.796630 kubelet[2694]: I1216 13:07:45.796144 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-bpf-maps\") pod \"5de33345-e876-467f-b67c-beadd8290182\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " Dec 16 13:07:45.796630 kubelet[2694]: I1216 13:07:45.796163 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-lib-modules\") pod \"5de33345-e876-467f-b67c-beadd8290182\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " Dec 16 13:07:45.796770 kubelet[2694]: I1216 13:07:45.796187 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/5de33345-e876-467f-b67c-beadd8290182-clustermesh-secrets\") pod \"5de33345-e876-467f-b67c-beadd8290182\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " Dec 16 13:07:45.796770 kubelet[2694]: I1216 13:07:45.796205 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-host-proc-sys-net\") pod \"5de33345-e876-467f-b67c-beadd8290182\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " Dec 16 13:07:45.796770 kubelet[2694]: I1216 13:07:45.796227 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/446aa47b-5e7d-45a8-bb40-167ed9a8504f-cilium-config-path\") pod \"446aa47b-5e7d-45a8-bb40-167ed9a8504f\" (UID: \"446aa47b-5e7d-45a8-bb40-167ed9a8504f\") " Dec 16 13:07:45.796770 kubelet[2694]: I1216 13:07:45.796248 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-cilium-run\") pod \"5de33345-e876-467f-b67c-beadd8290182\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " Dec 16 13:07:45.796770 kubelet[2694]: I1216 13:07:45.796268 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-etc-cni-netd\") pod \"5de33345-e876-467f-b67c-beadd8290182\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " Dec 16 13:07:45.796770 kubelet[2694]: I1216 13:07:45.796292 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5de33345-e876-467f-b67c-beadd8290182-hubble-tls\") pod \"5de33345-e876-467f-b67c-beadd8290182\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " Dec 16 13:07:45.796945 kubelet[2694]: I1216 13:07:45.796314 2694 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw598\" (UniqueName: \"kubernetes.io/projected/5de33345-e876-467f-b67c-beadd8290182-kube-api-access-cw598\") pod \"5de33345-e876-467f-b67c-beadd8290182\" (UID: \"5de33345-e876-467f-b67c-beadd8290182\") " Dec 16 13:07:45.796945 kubelet[2694]: I1216 13:07:45.796366 2694 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:45.796945 kubelet[2694]: I1216 13:07:45.796381 2694 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:45.796945 kubelet[2694]: I1216 13:07:45.796719 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5de33345-e876-467f-b67c-beadd8290182" (UID: "5de33345-e876-467f-b67c-beadd8290182"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:07:45.798188 kubelet[2694]: I1216 13:07:45.798157 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-cni-path" (OuterVolumeSpecName: "cni-path") pod "5de33345-e876-467f-b67c-beadd8290182" (UID: "5de33345-e876-467f-b67c-beadd8290182"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:07:45.798234 kubelet[2694]: I1216 13:07:45.798208 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-hostproc" (OuterVolumeSpecName: "hostproc") pod "5de33345-e876-467f-b67c-beadd8290182" (UID: "5de33345-e876-467f-b67c-beadd8290182"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:07:45.798234 kubelet[2694]: I1216 13:07:45.798229 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5de33345-e876-467f-b67c-beadd8290182" (UID: "5de33345-e876-467f-b67c-beadd8290182"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:07:45.798287 kubelet[2694]: I1216 13:07:45.798246 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5de33345-e876-467f-b67c-beadd8290182" (UID: "5de33345-e876-467f-b67c-beadd8290182"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:07:45.798287 kubelet[2694]: I1216 13:07:45.798265 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5de33345-e876-467f-b67c-beadd8290182" (UID: "5de33345-e876-467f-b67c-beadd8290182"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:07:45.798287 kubelet[2694]: I1216 13:07:45.798281 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5de33345-e876-467f-b67c-beadd8290182" (UID: "5de33345-e876-467f-b67c-beadd8290182"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:07:45.800589 kubelet[2694]: I1216 13:07:45.800332 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de33345-e876-467f-b67c-beadd8290182-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5de33345-e876-467f-b67c-beadd8290182" (UID: "5de33345-e876-467f-b67c-beadd8290182"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:07:45.800589 kubelet[2694]: I1216 13:07:45.800372 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5de33345-e876-467f-b67c-beadd8290182" (UID: "5de33345-e876-467f-b67c-beadd8290182"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:07:45.800589 kubelet[2694]: I1216 13:07:45.800533 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446aa47b-5e7d-45a8-bb40-167ed9a8504f-kube-api-access-vz7ff" (OuterVolumeSpecName: "kube-api-access-vz7ff") pod "446aa47b-5e7d-45a8-bb40-167ed9a8504f" (UID: "446aa47b-5e7d-45a8-bb40-167ed9a8504f"). InnerVolumeSpecName "kube-api-access-vz7ff". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:07:45.800688 kubelet[2694]: I1216 13:07:45.800620 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5de33345-e876-467f-b67c-beadd8290182-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5de33345-e876-467f-b67c-beadd8290182" (UID: "5de33345-e876-467f-b67c-beadd8290182"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:07:45.801455 kubelet[2694]: I1216 13:07:45.801434 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de33345-e876-467f-b67c-beadd8290182-kube-api-access-cw598" (OuterVolumeSpecName: "kube-api-access-cw598") pod "5de33345-e876-467f-b67c-beadd8290182" (UID: "5de33345-e876-467f-b67c-beadd8290182"). InnerVolumeSpecName "kube-api-access-cw598". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:07:45.803176 kubelet[2694]: I1216 13:07:45.803157 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de33345-e876-467f-b67c-beadd8290182-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5de33345-e876-467f-b67c-beadd8290182" (UID: "5de33345-e876-467f-b67c-beadd8290182"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:07:45.803283 kubelet[2694]: I1216 13:07:45.803263 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446aa47b-5e7d-45a8-bb40-167ed9a8504f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "446aa47b-5e7d-45a8-bb40-167ed9a8504f" (UID: "446aa47b-5e7d-45a8-bb40-167ed9a8504f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:07:45.897140 kubelet[2694]: I1216 13:07:45.897098 2694 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:45.897140 kubelet[2694]: I1216 13:07:45.897127 2694 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:45.897140 kubelet[2694]: I1216 13:07:45.897135 2694 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:45.897140 kubelet[2694]: I1216 13:07:45.897143 2694 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:45.897140 kubelet[2694]: I1216 13:07:45.897154 2694 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5de33345-e876-467f-b67c-beadd8290182-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:45.897370 kubelet[2694]: I1216 13:07:45.897163 2694 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:45.897370 kubelet[2694]: I1216 13:07:45.897171 2694 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/446aa47b-5e7d-45a8-bb40-167ed9a8504f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:45.897370 kubelet[2694]: I1216 13:07:45.897182 2694 
reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:45.897370 kubelet[2694]: I1216 13:07:45.897190 2694 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:45.897370 kubelet[2694]: I1216 13:07:45.897199 2694 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5de33345-e876-467f-b67c-beadd8290182-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:45.897370 kubelet[2694]: I1216 13:07:45.897206 2694 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cw598\" (UniqueName: \"kubernetes.io/projected/5de33345-e876-467f-b67c-beadd8290182-kube-api-access-cw598\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:45.897370 kubelet[2694]: I1216 13:07:45.897215 2694 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5de33345-e876-467f-b67c-beadd8290182-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:45.897370 kubelet[2694]: I1216 13:07:45.897222 2694 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5de33345-e876-467f-b67c-beadd8290182-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:45.897538 kubelet[2694]: I1216 13:07:45.897229 2694 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vz7ff\" (UniqueName: \"kubernetes.io/projected/446aa47b-5e7d-45a8-bb40-167ed9a8504f-kube-api-access-vz7ff\") on node \"localhost\" DevicePath \"\"" Dec 16 13:07:46.068308 kubelet[2694]: I1216 13:07:46.068274 2694 scope.go:117] "RemoveContainer" 
containerID="5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36" Dec 16 13:07:46.070969 containerd[1547]: time="2025-12-16T13:07:46.070902549Z" level=info msg="RemoveContainer for \"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\"" Dec 16 13:07:46.077718 containerd[1547]: time="2025-12-16T13:07:46.077633627Z" level=info msg="RemoveContainer for \"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\" returns successfully" Dec 16 13:07:46.077911 kubelet[2694]: I1216 13:07:46.077891 2694 scope.go:117] "RemoveContainer" containerID="77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8" Dec 16 13:07:46.079119 containerd[1547]: time="2025-12-16T13:07:46.079065779Z" level=info msg="RemoveContainer for \"77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8\"" Dec 16 13:07:46.079736 systemd[1]: Removed slice kubepods-besteffort-pod446aa47b_5e7d_45a8_bb40_167ed9a8504f.slice - libcontainer container kubepods-besteffort-pod446aa47b_5e7d_45a8_bb40_167ed9a8504f.slice. Dec 16 13:07:46.081269 systemd[1]: Removed slice kubepods-burstable-pod5de33345_e876_467f_b67c_beadd8290182.slice - libcontainer container kubepods-burstable-pod5de33345_e876_467f_b67c_beadd8290182.slice. Dec 16 13:07:46.081359 systemd[1]: kubepods-burstable-pod5de33345_e876_467f_b67c_beadd8290182.slice: Consumed 6.362s CPU time, 125.1M memory peak, 240K read from disk, 16.6M written to disk. 
Dec 16 13:07:46.086526 containerd[1547]: time="2025-12-16T13:07:46.086487138Z" level=info msg="RemoveContainer for \"77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8\" returns successfully" Dec 16 13:07:46.086750 kubelet[2694]: I1216 13:07:46.086729 2694 scope.go:117] "RemoveContainer" containerID="14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb" Dec 16 13:07:46.090226 containerd[1547]: time="2025-12-16T13:07:46.089060581Z" level=info msg="RemoveContainer for \"14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb\"" Dec 16 13:07:46.096661 containerd[1547]: time="2025-12-16T13:07:46.096619556Z" level=info msg="RemoveContainer for \"14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb\" returns successfully" Dec 16 13:07:46.096939 kubelet[2694]: I1216 13:07:46.096822 2694 scope.go:117] "RemoveContainer" containerID="80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc" Dec 16 13:07:46.098521 containerd[1547]: time="2025-12-16T13:07:46.098487598Z" level=info msg="RemoveContainer for \"80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc\"" Dec 16 13:07:46.104346 containerd[1547]: time="2025-12-16T13:07:46.104326257Z" level=info msg="RemoveContainer for \"80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc\" returns successfully" Dec 16 13:07:46.104475 kubelet[2694]: I1216 13:07:46.104459 2694 scope.go:117] "RemoveContainer" containerID="02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc" Dec 16 13:07:46.105773 containerd[1547]: time="2025-12-16T13:07:46.105740864Z" level=info msg="RemoveContainer for \"02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc\"" Dec 16 13:07:46.110176 containerd[1547]: time="2025-12-16T13:07:46.110144455Z" level=info msg="RemoveContainer for \"02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc\" returns successfully" Dec 16 13:07:46.110357 kubelet[2694]: I1216 13:07:46.110321 2694 scope.go:117] 
"RemoveContainer" containerID="5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36" Dec 16 13:07:46.110561 containerd[1547]: time="2025-12-16T13:07:46.110531191Z" level=error msg="ContainerStatus for \"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\": not found" Dec 16 13:07:46.110681 kubelet[2694]: E1216 13:07:46.110656 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\": not found" containerID="5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36" Dec 16 13:07:46.110724 kubelet[2694]: I1216 13:07:46.110687 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36"} err="failed to get container status \"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f7932ebabf326562100f15e7d7fbe80254d828eef124c991f4040c39b49ab36\": not found" Dec 16 13:07:46.110754 kubelet[2694]: I1216 13:07:46.110724 2694 scope.go:117] "RemoveContainer" containerID="77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8" Dec 16 13:07:46.110897 containerd[1547]: time="2025-12-16T13:07:46.110852260Z" level=error msg="ContainerStatus for \"77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8\": not found" Dec 16 13:07:46.111014 kubelet[2694]: E1216 13:07:46.110991 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8\": not found" containerID="77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8" Dec 16 13:07:46.111053 kubelet[2694]: I1216 13:07:46.111018 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8"} err="failed to get container status \"77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8\": rpc error: code = NotFound desc = an error occurred when try to find container \"77978480512fe94b9e03fb6beb92ec386422c0a2a7ccc127c21f15a51b233de8\": not found" Dec 16 13:07:46.111053 kubelet[2694]: I1216 13:07:46.111037 2694 scope.go:117] "RemoveContainer" containerID="14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb" Dec 16 13:07:46.111169 containerd[1547]: time="2025-12-16T13:07:46.111147609Z" level=error msg="ContainerStatus for \"14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb\": not found" Dec 16 13:07:46.111254 kubelet[2694]: E1216 13:07:46.111234 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb\": not found" containerID="14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb" Dec 16 13:07:46.111302 kubelet[2694]: I1216 13:07:46.111255 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb"} err="failed to get container status \"14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"14f27cf303267385d67777dc09b0b1914d9e5445eaaa92d7e0ceb9fa304f3dfb\": not found" Dec 16 13:07:46.111302 kubelet[2694]: I1216 13:07:46.111268 2694 scope.go:117] "RemoveContainer" containerID="80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc" Dec 16 13:07:46.111419 containerd[1547]: time="2025-12-16T13:07:46.111396910Z" level=error msg="ContainerStatus for \"80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc\": not found" Dec 16 13:07:46.111511 kubelet[2694]: E1216 13:07:46.111490 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc\": not found" containerID="80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc" Dec 16 13:07:46.111546 kubelet[2694]: I1216 13:07:46.111507 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc"} err="failed to get container status \"80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"80a756242519627659267f5785ac0c2238762883abc74678d39c1e8956d2f2cc\": not found" Dec 16 13:07:46.111546 kubelet[2694]: I1216 13:07:46.111518 2694 scope.go:117] "RemoveContainer" containerID="02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc" Dec 16 13:07:46.111665 containerd[1547]: time="2025-12-16T13:07:46.111633396Z" level=error msg="ContainerStatus for \"02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc\": not found" Dec 16 13:07:46.111745 kubelet[2694]: E1216 13:07:46.111724 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc\": not found" containerID="02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc" Dec 16 13:07:46.111773 kubelet[2694]: I1216 13:07:46.111743 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc"} err="failed to get container status \"02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"02fc1976a25a9a99e75fcdfcafb604f58595a532e645f422c6b164624283b1fc\": not found" Dec 16 13:07:46.111773 kubelet[2694]: I1216 13:07:46.111755 2694 scope.go:117] "RemoveContainer" containerID="6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0" Dec 16 13:07:46.125277 containerd[1547]: time="2025-12-16T13:07:46.125254730Z" level=info msg="RemoveContainer for \"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\"" Dec 16 13:07:46.128734 containerd[1547]: time="2025-12-16T13:07:46.128707428Z" level=info msg="RemoveContainer for \"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\" returns successfully" Dec 16 13:07:46.128837 kubelet[2694]: I1216 13:07:46.128818 2694 scope.go:117] "RemoveContainer" containerID="6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0" Dec 16 13:07:46.128975 containerd[1547]: time="2025-12-16T13:07:46.128953944Z" level=error msg="ContainerStatus for \"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\": not found" Dec 16 13:07:46.129135 kubelet[2694]: E1216 13:07:46.129108 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\": not found" containerID="6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0" Dec 16 13:07:46.129178 kubelet[2694]: I1216 13:07:46.129144 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0"} err="failed to get container status \"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e23c6fa0fbea95bd155b391fbf09043ab2d345612a607c7673f1308d05e39f0\": not found" Dec 16 13:07:46.584252 systemd[1]: var-lib-kubelet-pods-446aa47b\x2d5e7d\x2d45a8\x2dbb40\x2d167ed9a8504f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvz7ff.mount: Deactivated successfully. Dec 16 13:07:46.584397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bab5fc8fcd99973fca581044dedbb3767faae6d82c9ce695e55492af23c3ce79-shm.mount: Deactivated successfully. Dec 16 13:07:46.584495 systemd[1]: var-lib-kubelet-pods-5de33345\x2de876\x2d467f\x2db67c\x2dbeadd8290182-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcw598.mount: Deactivated successfully. Dec 16 13:07:46.584594 systemd[1]: var-lib-kubelet-pods-5de33345\x2de876\x2d467f\x2db67c\x2dbeadd8290182-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 16 13:07:46.584698 systemd[1]: var-lib-kubelet-pods-5de33345\x2de876\x2d467f\x2db67c\x2dbeadd8290182-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 16 13:07:46.860053 kubelet[2694]: I1216 13:07:46.859940 2694 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="446aa47b-5e7d-45a8-bb40-167ed9a8504f" path="/var/lib/kubelet/pods/446aa47b-5e7d-45a8-bb40-167ed9a8504f/volumes" Dec 16 13:07:46.860472 kubelet[2694]: I1216 13:07:46.860452 2694 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5de33345-e876-467f-b67c-beadd8290182" path="/var/lib/kubelet/pods/5de33345-e876-467f-b67c-beadd8290182/volumes" Dec 16 13:07:47.474962 sshd[4299]: Connection closed by 10.0.0.1 port 58040 Dec 16 13:07:47.475586 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:47.488707 systemd[1]: sshd@22-10.0.0.87:22-10.0.0.1:58040.service: Deactivated successfully. Dec 16 13:07:47.490707 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:07:47.491540 systemd-logind[1530]: Session 23 logged out. Waiting for processes to exit. Dec 16 13:07:47.494249 systemd[1]: Started sshd@23-10.0.0.87:22-10.0.0.1:58044.service - OpenSSH per-connection server daemon (10.0.0.1:58044). Dec 16 13:07:47.495242 systemd-logind[1530]: Removed session 23. Dec 16 13:07:47.552317 sshd[4444]: Accepted publickey for core from 10.0.0.1 port 58044 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:47.554385 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:47.558880 systemd-logind[1530]: New session 24 of user core. Dec 16 13:07:47.569011 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 16 13:07:47.907960 kubelet[2694]: E1216 13:07:47.907916 2694 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 13:07:48.420897 sshd[4449]: Connection closed by 10.0.0.1 port 58044 Dec 16 13:07:48.422028 sshd-session[4444]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:48.436338 systemd[1]: sshd@23-10.0.0.87:22-10.0.0.1:58044.service: Deactivated successfully. Dec 16 13:07:48.439743 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 13:07:48.441368 systemd-logind[1530]: Session 24 logged out. Waiting for processes to exit. Dec 16 13:07:48.445493 systemd-logind[1530]: Removed session 24. Dec 16 13:07:48.448054 systemd[1]: Started sshd@24-10.0.0.87:22-10.0.0.1:58056.service - OpenSSH per-connection server daemon (10.0.0.1:58056). Dec 16 13:07:48.460796 systemd[1]: Created slice kubepods-burstable-pod19258d27_a54f_4272_ac8a_130bf7a7612f.slice - libcontainer container kubepods-burstable-pod19258d27_a54f_4272_ac8a_130bf7a7612f.slice. Dec 16 13:07:48.499126 sshd[4461]: Accepted publickey for core from 10.0.0.1 port 58056 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:48.500360 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:48.504925 systemd-logind[1530]: New session 25 of user core. 
Dec 16 13:07:48.515448 kubelet[2694]: I1216 13:07:48.515394 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/19258d27-a54f-4272-ac8a-130bf7a7612f-cilium-ipsec-secrets\") pod \"cilium-r77bv\" (UID: \"19258d27-a54f-4272-ac8a-130bf7a7612f\") " pod="kube-system/cilium-r77bv" Dec 16 13:07:48.515448 kubelet[2694]: I1216 13:07:48.515447 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19258d27-a54f-4272-ac8a-130bf7a7612f-hubble-tls\") pod \"cilium-r77bv\" (UID: \"19258d27-a54f-4272-ac8a-130bf7a7612f\") " pod="kube-system/cilium-r77bv" Dec 16 13:07:48.515532 kubelet[2694]: I1216 13:07:48.515468 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19258d27-a54f-4272-ac8a-130bf7a7612f-bpf-maps\") pod \"cilium-r77bv\" (UID: \"19258d27-a54f-4272-ac8a-130bf7a7612f\") " pod="kube-system/cilium-r77bv" Dec 16 13:07:48.515604 kubelet[2694]: I1216 13:07:48.515566 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19258d27-a54f-4272-ac8a-130bf7a7612f-cni-path\") pod \"cilium-r77bv\" (UID: \"19258d27-a54f-4272-ac8a-130bf7a7612f\") " pod="kube-system/cilium-r77bv" Dec 16 13:07:48.515643 kubelet[2694]: I1216 13:07:48.515614 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfcqq\" (UniqueName: \"kubernetes.io/projected/19258d27-a54f-4272-ac8a-130bf7a7612f-kube-api-access-dfcqq\") pod \"cilium-r77bv\" (UID: \"19258d27-a54f-4272-ac8a-130bf7a7612f\") " pod="kube-system/cilium-r77bv" Dec 16 13:07:48.515688 kubelet[2694]: I1216 13:07:48.515671 2694 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19258d27-a54f-4272-ac8a-130bf7a7612f-host-proc-sys-net\") pod \"cilium-r77bv\" (UID: \"19258d27-a54f-4272-ac8a-130bf7a7612f\") " pod="kube-system/cilium-r77bv" Dec 16 13:07:48.515717 kubelet[2694]: I1216 13:07:48.515700 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19258d27-a54f-4272-ac8a-130bf7a7612f-xtables-lock\") pod \"cilium-r77bv\" (UID: \"19258d27-a54f-4272-ac8a-130bf7a7612f\") " pod="kube-system/cilium-r77bv" Dec 16 13:07:48.515742 kubelet[2694]: I1216 13:07:48.515720 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19258d27-a54f-4272-ac8a-130bf7a7612f-cilium-config-path\") pod \"cilium-r77bv\" (UID: \"19258d27-a54f-4272-ac8a-130bf7a7612f\") " pod="kube-system/cilium-r77bv" Dec 16 13:07:48.515763 kubelet[2694]: I1216 13:07:48.515740 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19258d27-a54f-4272-ac8a-130bf7a7612f-clustermesh-secrets\") pod \"cilium-r77bv\" (UID: \"19258d27-a54f-4272-ac8a-130bf7a7612f\") " pod="kube-system/cilium-r77bv" Dec 16 13:07:48.515792 kubelet[2694]: I1216 13:07:48.515762 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19258d27-a54f-4272-ac8a-130bf7a7612f-hostproc\") pod \"cilium-r77bv\" (UID: \"19258d27-a54f-4272-ac8a-130bf7a7612f\") " pod="kube-system/cilium-r77bv" Dec 16 13:07:48.515792 kubelet[2694]: I1216 13:07:48.515784 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/19258d27-a54f-4272-ac8a-130bf7a7612f-cilium-cgroup\") pod \"cilium-r77bv\" (UID: \"19258d27-a54f-4272-ac8a-130bf7a7612f\") " pod="kube-system/cilium-r77bv" Dec 16 13:07:48.515840 kubelet[2694]: I1216 13:07:48.515806 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19258d27-a54f-4272-ac8a-130bf7a7612f-cilium-run\") pod \"cilium-r77bv\" (UID: \"19258d27-a54f-4272-ac8a-130bf7a7612f\") " pod="kube-system/cilium-r77bv" Dec 16 13:07:48.515840 kubelet[2694]: I1216 13:07:48.515832 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19258d27-a54f-4272-ac8a-130bf7a7612f-lib-modules\") pod \"cilium-r77bv\" (UID: \"19258d27-a54f-4272-ac8a-130bf7a7612f\") " pod="kube-system/cilium-r77bv" Dec 16 13:07:48.515926 kubelet[2694]: I1216 13:07:48.515852 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19258d27-a54f-4272-ac8a-130bf7a7612f-host-proc-sys-kernel\") pod \"cilium-r77bv\" (UID: \"19258d27-a54f-4272-ac8a-130bf7a7612f\") " pod="kube-system/cilium-r77bv" Dec 16 13:07:48.515926 kubelet[2694]: I1216 13:07:48.515899 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19258d27-a54f-4272-ac8a-130bf7a7612f-etc-cni-netd\") pod \"cilium-r77bv\" (UID: \"19258d27-a54f-4272-ac8a-130bf7a7612f\") " pod="kube-system/cilium-r77bv" Dec 16 13:07:48.516116 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 13:07:48.568222 sshd[4464]: Connection closed by 10.0.0.1 port 58056 Dec 16 13:07:48.568532 sshd-session[4461]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:48.581635 systemd[1]: sshd@24-10.0.0.87:22-10.0.0.1:58056.service: Deactivated successfully. Dec 16 13:07:48.583526 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 13:07:48.584300 systemd-logind[1530]: Session 25 logged out. Waiting for processes to exit. Dec 16 13:07:48.587136 systemd[1]: Started sshd@25-10.0.0.87:22-10.0.0.1:58072.service - OpenSSH per-connection server daemon (10.0.0.1:58072). Dec 16 13:07:48.588036 systemd-logind[1530]: Removed session 25. Dec 16 13:07:48.650562 sshd[4471]: Accepted publickey for core from 10.0.0.1 port 58072 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:07:48.652330 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:48.656919 systemd-logind[1530]: New session 26 of user core. Dec 16 13:07:48.668038 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 13:07:48.769107 kubelet[2694]: E1216 13:07:48.768984 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:07:48.773390 containerd[1547]: time="2025-12-16T13:07:48.773173324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r77bv,Uid:19258d27-a54f-4272-ac8a-130bf7a7612f,Namespace:kube-system,Attempt:0,}" Dec 16 13:07:48.788951 containerd[1547]: time="2025-12-16T13:07:48.788562895Z" level=info msg="connecting to shim 031b77984b7f91b433965d14c56ac14d72014b263c7d7d1286f451a2821ddc14" address="unix:///run/containerd/s/51cef5859d46f8c21081e8ab8f030c5afafd99e8221a54f7f722c9ee409373ff" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:48.812023 systemd[1]: Started cri-containerd-031b77984b7f91b433965d14c56ac14d72014b263c7d7d1286f451a2821ddc14.scope - libcontainer container 031b77984b7f91b433965d14c56ac14d72014b263c7d7d1286f451a2821ddc14. 
Dec 16 13:07:48.841690 containerd[1547]: time="2025-12-16T13:07:48.841640871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r77bv,Uid:19258d27-a54f-4272-ac8a-130bf7a7612f,Namespace:kube-system,Attempt:0,} returns sandbox id \"031b77984b7f91b433965d14c56ac14d72014b263c7d7d1286f451a2821ddc14\"" Dec 16 13:07:48.842407 kubelet[2694]: E1216 13:07:48.842386 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:07:48.850193 containerd[1547]: time="2025-12-16T13:07:48.850136078Z" level=info msg="CreateContainer within sandbox \"031b77984b7f91b433965d14c56ac14d72014b263c7d7d1286f451a2821ddc14\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 13:07:48.857208 containerd[1547]: time="2025-12-16T13:07:48.857138030Z" level=info msg="Container a4ddb5371fde0624512246348d5629b774d319f86776023da5a24f6a9b66525d: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:48.863996 containerd[1547]: time="2025-12-16T13:07:48.863955817Z" level=info msg="CreateContainer within sandbox \"031b77984b7f91b433965d14c56ac14d72014b263c7d7d1286f451a2821ddc14\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a4ddb5371fde0624512246348d5629b774d319f86776023da5a24f6a9b66525d\"" Dec 16 13:07:48.865014 containerd[1547]: time="2025-12-16T13:07:48.864938158Z" level=info msg="StartContainer for \"a4ddb5371fde0624512246348d5629b774d319f86776023da5a24f6a9b66525d\"" Dec 16 13:07:48.866708 containerd[1547]: time="2025-12-16T13:07:48.866647259Z" level=info msg="connecting to shim a4ddb5371fde0624512246348d5629b774d319f86776023da5a24f6a9b66525d" address="unix:///run/containerd/s/51cef5859d46f8c21081e8ab8f030c5afafd99e8221a54f7f722c9ee409373ff" protocol=ttrpc version=3 Dec 16 13:07:48.890115 systemd[1]: Started cri-containerd-a4ddb5371fde0624512246348d5629b774d319f86776023da5a24f6a9b66525d.scope - libcontainer 
container a4ddb5371fde0624512246348d5629b774d319f86776023da5a24f6a9b66525d. Dec 16 13:07:48.995650 systemd[1]: cri-containerd-a4ddb5371fde0624512246348d5629b774d319f86776023da5a24f6a9b66525d.scope: Deactivated successfully. Dec 16 13:07:49.047376 containerd[1547]: time="2025-12-16T13:07:49.047301049Z" level=info msg="received container exit event container_id:\"a4ddb5371fde0624512246348d5629b774d319f86776023da5a24f6a9b66525d\" id:\"a4ddb5371fde0624512246348d5629b774d319f86776023da5a24f6a9b66525d\" pid:4545 exited_at:{seconds:1765890468 nanos:997830029}" Dec 16 13:07:49.048441 containerd[1547]: time="2025-12-16T13:07:49.048419070Z" level=info msg="StartContainer for \"a4ddb5371fde0624512246348d5629b774d319f86776023da5a24f6a9b66525d\" returns successfully" Dec 16 13:07:49.078044 kubelet[2694]: E1216 13:07:49.078005 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:07:50.081469 kubelet[2694]: E1216 13:07:50.081438 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 13:07:50.387738 containerd[1547]: time="2025-12-16T13:07:50.387539333Z" level=info msg="CreateContainer within sandbox \"031b77984b7f91b433965d14c56ac14d72014b263c7d7d1286f451a2821ddc14\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 13:07:50.676169 containerd[1547]: time="2025-12-16T13:07:50.676003415Z" level=info msg="Container 46616b88cac03d0e0fcc31817e613c935533aad6ea15a3f04f3bd0f3b44dcf2c: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:50.983971 containerd[1547]: time="2025-12-16T13:07:50.983830185Z" level=info msg="CreateContainer within sandbox \"031b77984b7f91b433965d14c56ac14d72014b263c7d7d1286f451a2821ddc14\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns 
container id \"46616b88cac03d0e0fcc31817e613c935533aad6ea15a3f04f3bd0f3b44dcf2c\"" Dec 16 13:07:50.984327 containerd[1547]: time="2025-12-16T13:07:50.984303736Z" level=info msg="StartContainer for \"46616b88cac03d0e0fcc31817e613c935533aad6ea15a3f04f3bd0f3b44dcf2c\"" Dec 16 13:07:50.985092 containerd[1547]: time="2025-12-16T13:07:50.985059027Z" level=info msg="connecting to shim 46616b88cac03d0e0fcc31817e613c935533aad6ea15a3f04f3bd0f3b44dcf2c" address="unix:///run/containerd/s/51cef5859d46f8c21081e8ab8f030c5afafd99e8221a54f7f722c9ee409373ff" protocol=ttrpc version=3 Dec 16 13:07:51.008032 systemd[1]: Started cri-containerd-46616b88cac03d0e0fcc31817e613c935533aad6ea15a3f04f3bd0f3b44dcf2c.scope - libcontainer container 46616b88cac03d0e0fcc31817e613c935533aad6ea15a3f04f3bd0f3b44dcf2c. Dec 16 13:07:51.053223 systemd[1]: cri-containerd-46616b88cac03d0e0fcc31817e613c935533aad6ea15a3f04f3bd0f3b44dcf2c.scope: Deactivated successfully. Dec 16 13:07:51.087287 containerd[1547]: time="2025-12-16T13:07:51.087159045Z" level=info msg="received container exit event container_id:\"46616b88cac03d0e0fcc31817e613c935533aad6ea15a3f04f3bd0f3b44dcf2c\" id:\"46616b88cac03d0e0fcc31817e613c935533aad6ea15a3f04f3bd0f3b44dcf2c\" pid:4592 exited_at:{seconds:1765890471 nanos:53506210}" Dec 16 13:07:51.088475 containerd[1547]: time="2025-12-16T13:07:51.088439946Z" level=info msg="StartContainer for \"46616b88cac03d0e0fcc31817e613c935533aad6ea15a3f04f3bd0f3b44dcf2c\" returns successfully" Dec 16 13:07:51.111914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46616b88cac03d0e0fcc31817e613c935533aad6ea15a3f04f3bd0f3b44dcf2c-rootfs.mount: Deactivated successfully. 
Dec 16 13:07:52.095237 kubelet[2694]: E1216 13:07:52.095207 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:07:52.105920 containerd[1547]: time="2025-12-16T13:07:52.103146378Z" level=info msg="CreateContainer within sandbox \"031b77984b7f91b433965d14c56ac14d72014b263c7d7d1286f451a2821ddc14\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 13:07:52.123432 containerd[1547]: time="2025-12-16T13:07:52.123389570Z" level=info msg="Container d60ee2cd20570692bf68f86e581f4b9ce59494703b14dca8be4c1564da6027c9: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:07:52.131083 containerd[1547]: time="2025-12-16T13:07:52.131051594Z" level=info msg="CreateContainer within sandbox \"031b77984b7f91b433965d14c56ac14d72014b263c7d7d1286f451a2821ddc14\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d60ee2cd20570692bf68f86e581f4b9ce59494703b14dca8be4c1564da6027c9\""
Dec 16 13:07:52.132885 containerd[1547]: time="2025-12-16T13:07:52.131527548Z" level=info msg="StartContainer for \"d60ee2cd20570692bf68f86e581f4b9ce59494703b14dca8be4c1564da6027c9\""
Dec 16 13:07:52.132885 containerd[1547]: time="2025-12-16T13:07:52.132700160Z" level=info msg="connecting to shim d60ee2cd20570692bf68f86e581f4b9ce59494703b14dca8be4c1564da6027c9" address="unix:///run/containerd/s/51cef5859d46f8c21081e8ab8f030c5afafd99e8221a54f7f722c9ee409373ff" protocol=ttrpc version=3
Dec 16 13:07:52.159009 systemd[1]: Started cri-containerd-d60ee2cd20570692bf68f86e581f4b9ce59494703b14dca8be4c1564da6027c9.scope - libcontainer container d60ee2cd20570692bf68f86e581f4b9ce59494703b14dca8be4c1564da6027c9.
Dec 16 13:07:52.258089 containerd[1547]: time="2025-12-16T13:07:52.258045691Z" level=info msg="StartContainer for \"d60ee2cd20570692bf68f86e581f4b9ce59494703b14dca8be4c1564da6027c9\" returns successfully"
Dec 16 13:07:52.258537 systemd[1]: cri-containerd-d60ee2cd20570692bf68f86e581f4b9ce59494703b14dca8be4c1564da6027c9.scope: Deactivated successfully.
Dec 16 13:07:52.260509 containerd[1547]: time="2025-12-16T13:07:52.260458965Z" level=info msg="received container exit event container_id:\"d60ee2cd20570692bf68f86e581f4b9ce59494703b14dca8be4c1564da6027c9\" id:\"d60ee2cd20570692bf68f86e581f4b9ce59494703b14dca8be4c1564da6027c9\" pid:4636 exited_at:{seconds:1765890472 nanos:260160672}"
Dec 16 13:07:52.287755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d60ee2cd20570692bf68f86e581f4b9ce59494703b14dca8be4c1564da6027c9-rootfs.mount: Deactivated successfully.
Dec 16 13:07:52.908563 kubelet[2694]: E1216 13:07:52.908522 2694 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 16 13:07:53.099643 kubelet[2694]: E1216 13:07:53.099570 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:07:53.105479 containerd[1547]: time="2025-12-16T13:07:53.105378803Z" level=info msg="CreateContainer within sandbox \"031b77984b7f91b433965d14c56ac14d72014b263c7d7d1286f451a2821ddc14\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 13:07:53.115569 containerd[1547]: time="2025-12-16T13:07:53.115499767Z" level=info msg="Container 3f0a1210f2b48b4bf97eaa5a066521427b0ec730e2252af2d2307529c693c297: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:07:53.123559 containerd[1547]: time="2025-12-16T13:07:53.123511321Z" level=info msg="CreateContainer within sandbox \"031b77984b7f91b433965d14c56ac14d72014b263c7d7d1286f451a2821ddc14\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f0a1210f2b48b4bf97eaa5a066521427b0ec730e2252af2d2307529c693c297\""
Dec 16 13:07:53.124216 containerd[1547]: time="2025-12-16T13:07:53.124185215Z" level=info msg="StartContainer for \"3f0a1210f2b48b4bf97eaa5a066521427b0ec730e2252af2d2307529c693c297\""
Dec 16 13:07:53.125449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount901119657.mount: Deactivated successfully.
Dec 16 13:07:53.125674 containerd[1547]: time="2025-12-16T13:07:53.125628044Z" level=info msg="connecting to shim 3f0a1210f2b48b4bf97eaa5a066521427b0ec730e2252af2d2307529c693c297" address="unix:///run/containerd/s/51cef5859d46f8c21081e8ab8f030c5afafd99e8221a54f7f722c9ee409373ff" protocol=ttrpc version=3
Dec 16 13:07:53.149160 systemd[1]: Started cri-containerd-3f0a1210f2b48b4bf97eaa5a066521427b0ec730e2252af2d2307529c693c297.scope - libcontainer container 3f0a1210f2b48b4bf97eaa5a066521427b0ec730e2252af2d2307529c693c297.
Dec 16 13:07:53.183039 systemd[1]: cri-containerd-3f0a1210f2b48b4bf97eaa5a066521427b0ec730e2252af2d2307529c693c297.scope: Deactivated successfully.
Dec 16 13:07:53.184917 containerd[1547]: time="2025-12-16T13:07:53.184413390Z" level=info msg="received container exit event container_id:\"3f0a1210f2b48b4bf97eaa5a066521427b0ec730e2252af2d2307529c693c297\" id:\"3f0a1210f2b48b4bf97eaa5a066521427b0ec730e2252af2d2307529c693c297\" pid:4674 exited_at:{seconds:1765890473 nanos:183290275}"
Dec 16 13:07:53.192720 containerd[1547]: time="2025-12-16T13:07:53.192689213Z" level=info msg="StartContainer for \"3f0a1210f2b48b4bf97eaa5a066521427b0ec730e2252af2d2307529c693c297\" returns successfully"
Dec 16 13:07:53.206074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f0a1210f2b48b4bf97eaa5a066521427b0ec730e2252af2d2307529c693c297-rootfs.mount: Deactivated successfully.
Dec 16 13:07:54.104676 kubelet[2694]: E1216 13:07:54.104637 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:07:54.109899 containerd[1547]: time="2025-12-16T13:07:54.109837981Z" level=info msg="CreateContainer within sandbox \"031b77984b7f91b433965d14c56ac14d72014b263c7d7d1286f451a2821ddc14\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 13:07:54.120167 containerd[1547]: time="2025-12-16T13:07:54.119358992Z" level=info msg="Container 6c802972e7a08124add893c4214ab570eb2695943213b99d9233b0495532f272: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:07:54.126784 containerd[1547]: time="2025-12-16T13:07:54.126736232Z" level=info msg="CreateContainer within sandbox \"031b77984b7f91b433965d14c56ac14d72014b263c7d7d1286f451a2821ddc14\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6c802972e7a08124add893c4214ab570eb2695943213b99d9233b0495532f272\""
Dec 16 13:07:54.127228 containerd[1547]: time="2025-12-16T13:07:54.127202927Z" level=info msg="StartContainer for \"6c802972e7a08124add893c4214ab570eb2695943213b99d9233b0495532f272\""
Dec 16 13:07:54.128172 containerd[1547]: time="2025-12-16T13:07:54.128149423Z" level=info msg="connecting to shim 6c802972e7a08124add893c4214ab570eb2695943213b99d9233b0495532f272" address="unix:///run/containerd/s/51cef5859d46f8c21081e8ab8f030c5afafd99e8221a54f7f722c9ee409373ff" protocol=ttrpc version=3
Dec 16 13:07:54.153309 systemd[1]: Started cri-containerd-6c802972e7a08124add893c4214ab570eb2695943213b99d9233b0495532f272.scope - libcontainer container 6c802972e7a08124add893c4214ab570eb2695943213b99d9233b0495532f272.
Dec 16 13:07:54.215786 containerd[1547]: time="2025-12-16T13:07:54.215737014Z" level=info msg="StartContainer for \"6c802972e7a08124add893c4214ab570eb2695943213b99d9233b0495532f272\" returns successfully"
Dec 16 13:07:54.614895 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Dec 16 13:07:54.960379 kubelet[2694]: I1216 13:07:54.960236 2694 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T13:07:54Z","lastTransitionTime":"2025-12-16T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 16 13:07:55.108974 kubelet[2694]: E1216 13:07:55.108902 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:07:56.113108 kubelet[2694]: E1216 13:07:56.113070 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:07:56.857734 kubelet[2694]: E1216 13:07:56.857699 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:07:57.115308 kubelet[2694]: E1216 13:07:57.115185 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:07:58.018452 systemd-networkd[1455]: lxc_health: Link UP
Dec 16 13:07:58.019621 systemd-networkd[1455]: lxc_health: Gained carrier
Dec 16 13:07:58.770703 kubelet[2694]: E1216 13:07:58.770658 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:07:58.793479 kubelet[2694]: I1216 13:07:58.793406 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r77bv" podStartSLOduration=10.793391054 podStartE2EDuration="10.793391054s" podCreationTimestamp="2025-12-16 13:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:55.476743648 +0000 UTC m=+82.700710823" watchObservedRunningTime="2025-12-16 13:07:58.793391054 +0000 UTC m=+86.017358219"
Dec 16 13:07:59.118975 kubelet[2694]: E1216 13:07:59.118930 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:07:59.974133 systemd-networkd[1455]: lxc_health: Gained IPv6LL
Dec 16 13:08:00.121087 kubelet[2694]: E1216 13:08:00.121053 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:08:01.857860 kubelet[2694]: E1216 13:08:01.857804 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:08:02.858210 kubelet[2694]: E1216 13:08:02.858180 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:08:03.770296 sshd[4480]: Connection closed by 10.0.0.1 port 58072
Dec 16 13:08:03.770645 sshd-session[4471]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:03.775157 systemd[1]: sshd@25-10.0.0.87:22-10.0.0.1:58072.service: Deactivated successfully.
Dec 16 13:08:03.777161 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 13:08:03.777891 systemd-logind[1530]: Session 26 logged out. Waiting for processes to exit.
Dec 16 13:08:03.779350 systemd-logind[1530]: Removed session 26.