Dec 16 13:12:14.862546 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:12:14.862574 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:12:14.862586 kernel: BIOS-provided physical RAM map:
Dec 16 13:12:14.862597 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:12:14.862606 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 16 13:12:14.862615 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 16 13:12:14.862625 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 16 13:12:14.862633 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 16 13:12:14.862642 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 16 13:12:14.862650 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 16 13:12:14.862659 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Dec 16 13:12:14.862668 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Dec 16 13:12:14.862679 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Dec 16 13:12:14.862688 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Dec 16 13:12:14.862699 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Dec 16 13:12:14.862708 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 16 13:12:14.862718 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Dec 16 13:12:14.862729 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Dec 16 13:12:14.862739 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Dec 16 13:12:14.862748 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Dec 16 13:12:14.862757 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Dec 16 13:12:14.862766 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 16 13:12:14.862774 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 16 13:12:14.862783 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 13:12:14.862792 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 16 13:12:14.862802 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 13:12:14.862811 kernel: NX (Execute Disable) protection: active
Dec 16 13:12:14.862820 kernel: APIC: Static calls initialized
Dec 16 13:12:14.862832 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Dec 16 13:12:14.862841 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Dec 16 13:12:14.862851 kernel: extended physical RAM map:
Dec 16 13:12:14.862860 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:12:14.862869 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 16 13:12:14.862878 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 16 13:12:14.862887 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 16 13:12:14.862896 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 16 13:12:14.862905 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 16 13:12:14.862914 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 16 13:12:14.862924 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Dec 16 13:12:14.862936 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Dec 16 13:12:14.862949 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Dec 16 13:12:14.862959 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Dec 16 13:12:14.862968 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Dec 16 13:12:14.862978 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Dec 16 13:12:14.862990 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Dec 16 13:12:14.862999 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Dec 16 13:12:14.863008 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Dec 16 13:12:14.863017 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 16 13:12:14.863026 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Dec 16 13:12:14.863035 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Dec 16 13:12:14.863045 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Dec 16 13:12:14.863054 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Dec 16 13:12:14.863065 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Dec 16 13:12:14.863074 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 16 13:12:14.863084 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 16 13:12:14.863096 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 13:12:14.863106 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 16 13:12:14.863116 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 13:12:14.863135 kernel: efi: EFI v2.7 by EDK II
Dec 16 13:12:14.863145 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Dec 16 13:12:14.863155 kernel: random: crng init done
Dec 16 13:12:14.863165 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Dec 16 13:12:14.863174 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Dec 16 13:12:14.863184 kernel: secureboot: Secure boot disabled
Dec 16 13:12:14.863194 kernel: SMBIOS 2.8 present.
Dec 16 13:12:14.863203 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Dec 16 13:12:14.863215 kernel: DMI: Memory slots populated: 1/1
Dec 16 13:12:14.863225 kernel: Hypervisor detected: KVM
Dec 16 13:12:14.863247 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Dec 16 13:12:14.863268 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 13:12:14.863290 kernel: kvm-clock: using sched offset of 4147405647 cycles
Dec 16 13:12:14.863300 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 13:12:14.863311 kernel: tsc: Detected 2794.748 MHz processor
Dec 16 13:12:14.863321 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:12:14.863331 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:12:14.863340 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Dec 16 13:12:14.863350 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 13:12:14.863363 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:12:14.863373 kernel: Using GB pages for direct mapping
Dec 16 13:12:14.863383 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:12:14.863393 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 16 13:12:14.863404 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 16 13:12:14.863414 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:12:14.863424 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:12:14.863433 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 16 13:12:14.863456 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:12:14.863469 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:12:14.863478 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:12:14.863488 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:12:14.863497 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 16 13:12:14.863506 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 16 13:12:14.863516 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Dec 16 13:12:14.863525 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 16 13:12:14.863543 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 16 13:12:14.863565 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 16 13:12:14.863575 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 16 13:12:14.863586 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 16 13:12:14.863596 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 16 13:12:14.863605 kernel: No NUMA configuration found
Dec 16 13:12:14.863615 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Dec 16 13:12:14.863625 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Dec 16 13:12:14.863635 kernel: Zone ranges:
Dec 16 13:12:14.863645 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:12:14.863655 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Dec 16 13:12:14.863669 kernel: Normal empty
Dec 16 13:12:14.863678 kernel: Device empty
Dec 16 13:12:14.863688 kernel: Movable zone start for each node
Dec 16 13:12:14.863698 kernel: Early memory node ranges
Dec 16 13:12:14.863708 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 16 13:12:14.863718 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 16 13:12:14.863728 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 16 13:12:14.863738 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Dec 16 13:12:14.863748 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Dec 16 13:12:14.863761 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Dec 16 13:12:14.863771 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Dec 16 13:12:14.863781 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Dec 16 13:12:14.863791 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Dec 16 13:12:14.863801 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:12:14.863820 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 16 13:12:14.863833 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 16 13:12:14.863844 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:12:14.863854 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Dec 16 13:12:14.863864 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Dec 16 13:12:14.863874 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 16 13:12:14.863884 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Dec 16 13:12:14.863895 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Dec 16 13:12:14.863908 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 13:12:14.863918 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 13:12:14.863929 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:12:14.863939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 13:12:14.863950 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 13:12:14.863963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:12:14.863974 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 13:12:14.863984 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 13:12:14.863995 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:12:14.864005 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 13:12:14.864015 kernel: TSC deadline timer available
Dec 16 13:12:14.864026 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:12:14.864037 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:12:14.864047 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:12:14.864061 kernel: CPU topo: Max. threads per core: 1
Dec 16 13:12:14.864071 kernel: CPU topo: Num. cores per package: 4
Dec 16 13:12:14.864082 kernel: CPU topo: Num. threads per package: 4
Dec 16 13:12:14.864092 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Dec 16 13:12:14.864102 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 13:12:14.864113 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 16 13:12:14.864133 kernel: kvm-guest: setup PV sched yield
Dec 16 13:12:14.864144 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Dec 16 13:12:14.864154 kernel: Booting paravirtualized kernel on KVM
Dec 16 13:12:14.864168 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:12:14.864179 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 16 13:12:14.864190 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Dec 16 13:12:14.864201 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Dec 16 13:12:14.864212 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 16 13:12:14.864222 kernel: kvm-guest: PV spinlocks enabled
Dec 16 13:12:14.864233 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:12:14.864245 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:12:14.864258 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 13:12:14.864269 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 13:12:14.864279 kernel: Fallback order for Node 0: 0
Dec 16 13:12:14.864290 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Dec 16 13:12:14.864300 kernel: Policy zone: DMA32
Dec 16 13:12:14.864310 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:12:14.864321 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 13:12:14.864331 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:12:14.864342 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:12:14.864356 kernel: Dynamic Preempt: voluntary
Dec 16 13:12:14.864367 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:12:14.864378 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:12:14.864390 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 13:12:14.864401 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:12:14.864411 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:12:14.864422 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:12:14.864432 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:12:14.864528 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 13:12:14.864539 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 13:12:14.864553 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 13:12:14.864564 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 13:12:14.864574 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 16 13:12:14.864585 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:12:14.864595 kernel: Console: colour dummy device 80x25
Dec 16 13:12:14.864606 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:12:14.864617 kernel: ACPI: Core revision 20240827
Dec 16 13:12:14.864627 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 13:12:14.864638 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:12:14.864651 kernel: x2apic enabled
Dec 16 13:12:14.864662 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:12:14.864672 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 16 13:12:14.864683 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 16 13:12:14.864694 kernel: kvm-guest: setup PV IPIs
Dec 16 13:12:14.864704 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 13:12:14.864715 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 16 13:12:14.864726 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 16 13:12:14.864736 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 13:12:14.864750 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 16 13:12:14.864761 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 16 13:12:14.864771 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:12:14.864782 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:12:14.864793 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:12:14.864815 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 16 13:12:14.864826 kernel: active return thunk: retbleed_return_thunk
Dec 16 13:12:14.864836 kernel: RETBleed: Mitigation: untrained return thunk
Dec 16 13:12:14.864850 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 13:12:14.864861 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 13:12:14.864871 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 16 13:12:14.864883 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 16 13:12:14.864894 kernel: active return thunk: srso_return_thunk
Dec 16 13:12:14.864904 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 16 13:12:14.864915 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:12:14.864925 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:12:14.864936 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:12:14.864950 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:12:14.864961 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 16 13:12:14.864971 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:12:14.864981 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:12:14.864991 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:12:14.865002 kernel: landlock: Up and running.
Dec 16 13:12:14.865012 kernel: SELinux: Initializing.
Dec 16 13:12:14.865023 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 13:12:14.865034 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 13:12:14.865047 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 16 13:12:14.865058 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 16 13:12:14.865068 kernel: ... version: 0
Dec 16 13:12:14.865079 kernel: ... bit width: 48
Dec 16 13:12:14.865089 kernel: ... generic registers: 6
Dec 16 13:12:14.865100 kernel: ... value mask: 0000ffffffffffff
Dec 16 13:12:14.865110 kernel: ... max period: 00007fffffffffff
Dec 16 13:12:14.865121 kernel: ... fixed-purpose events: 0
Dec 16 13:12:14.865141 kernel: ... event mask: 000000000000003f
Dec 16 13:12:14.865155 kernel: signal: max sigframe size: 1776
Dec 16 13:12:14.865165 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:12:14.865176 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:12:14.865187 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:12:14.865200 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:12:14.865211 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:12:14.865223 kernel: .... node #0, CPUs: #1 #2 #3
Dec 16 13:12:14.865233 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 13:12:14.865244 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 16 13:12:14.865258 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 145388K reserved, 0K cma-reserved)
Dec 16 13:12:14.865268 kernel: devtmpfs: initialized
Dec 16 13:12:14.865279 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:12:14.865289 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 16 13:12:14.865300 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 16 13:12:14.865311 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Dec 16 13:12:14.865321 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 16 13:12:14.865332 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Dec 16 13:12:14.865343 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 16 13:12:14.865356 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:12:14.865367 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 13:12:14.865377 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:12:14.865388 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:12:14.865398 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:12:14.865409 kernel: audit: type=2000 audit(1765890733.141:1): state=initialized audit_enabled=0 res=1
Dec 16 13:12:14.865419 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:12:14.865430 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:12:14.865457 kernel: cpuidle: using governor menu
Dec 16 13:12:14.865468 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:12:14.865479 kernel: dca service started, version 1.12.1
Dec 16 13:12:14.865490 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Dec 16 13:12:14.865500 kernel: PCI: Using configuration type 1 for base access
Dec 16 13:12:14.865511 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:12:14.865522 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:12:14.865532 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:12:14.865543 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:12:14.865557 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:12:14.865568 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:12:14.865578 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:12:14.865589 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:12:14.865599 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 13:12:14.865610 kernel: ACPI: Interpreter enabled
Dec 16 13:12:14.865620 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 16 13:12:14.865630 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:12:14.865641 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:12:14.865655 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 13:12:14.865666 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 13:12:14.865676 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 13:12:14.865900 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 13:12:14.866049 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 13:12:14.866202 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 13:12:14.866217 kernel: PCI host bridge to bus 0000:00
Dec 16 13:12:14.866364 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 13:12:14.866532 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 13:12:14.866752 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 13:12:14.866890 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Dec 16 13:12:14.867024 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Dec 16 13:12:14.867170 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Dec 16 13:12:14.867301 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 13:12:14.867501 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 16 13:12:14.867705 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 13:12:14.867849 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Dec 16 13:12:14.867993 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Dec 16 13:12:14.868147 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 16 13:12:14.868292 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 13:12:14.868481 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 13:12:14.868637 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Dec 16 13:12:14.868779 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Dec 16 13:12:14.868918 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Dec 16 13:12:14.869067 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 16 13:12:14.869219 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Dec 16 13:12:14.869359 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Dec 16 13:12:14.869519 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Dec 16 13:12:14.869674 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 13:12:14.869821 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Dec 16 13:12:14.869964 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Dec 16 13:12:14.870105 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Dec 16 13:12:14.870267 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Dec 16 13:12:14.870430 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 16 13:12:14.870610 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 13:12:14.870772 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 16 13:12:14.870921 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Dec 16 13:12:14.871068 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Dec 16 13:12:14.871239 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 16 13:12:14.871395 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Dec 16 13:12:14.871412 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 13:12:14.871427 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 13:12:14.871452 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 13:12:14.871463 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 13:12:14.871474 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 13:12:14.871485 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 13:12:14.871495 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 13:12:14.871506 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 13:12:14.871516 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 13:12:14.871527 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 13:12:14.871540 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 13:12:14.871551 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 13:12:14.871561 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 13:12:14.871572 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 13:12:14.871582 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 13:12:14.871593 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 13:12:14.871603 kernel: iommu: Default domain type: Translated
Dec 16 13:12:14.871614 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:12:14.871624 kernel: efivars: Registered efivars operations
Dec 16 13:12:14.871638 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:12:14.871648 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 13:12:14.871659 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 16 13:12:14.871669 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Dec 16 13:12:14.871680 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Dec 16 13:12:14.871690 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Dec 16 13:12:14.871701 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Dec 16 13:12:14.871711 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Dec 16 13:12:14.871721 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Dec 16 13:12:14.871735 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Dec 16 13:12:14.871892 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 13:12:14.872044 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 13:12:14.872204 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 13:12:14.872220 kernel: vgaarb: loaded
Dec 16 13:12:14.872229 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 13:12:14.872237 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 13:12:14.872245 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 13:12:14.872256 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:12:14.872264 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:12:14.872273 kernel: pnp: PnP ACPI init
Dec 16 13:12:14.872420 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Dec 16 13:12:14.872450 kernel: pnp: PnP ACPI: found 6 devices
Dec 16 13:12:14.872463 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:12:14.872474 kernel: NET: Registered PF_INET protocol family
Dec 16 13:12:14.872485 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 13:12:14.872500 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 13:12:14.872511 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:12:14.872521 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 13:12:14.872532 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 13:12:14.872543 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 13:12:14.872554 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 13:12:14.872565 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 13:12:14.872576 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:12:14.872587 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:12:14.872738 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Dec 16 13:12:14.872877 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Dec 16 13:12:14.873004 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 13:12:14.873138 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 13:12:14.873266 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 13:12:14.873393 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Dec 16 13:12:14.873557 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Dec 16 13:12:14.873688 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Dec 16 13:12:14.873708 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:12:14.873719 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 16 13:12:14.873733 kernel: Initialise system trusted keyrings
Dec 16 13:12:14.873743 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 13:12:14.873754 kernel: Key type asymmetric registered
Dec 16 13:12:14.873766 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:12:14.873777 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:12:14.873787 kernel: io scheduler mq-deadline registered
Dec 16 13:12:14.873798 kernel: io scheduler kyber registered
Dec 16 13:12:14.873808 kernel: io scheduler bfq registered
Dec 16 13:12:14.873819 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:12:14.873830 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 16 13:12:14.873842 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 16 13:12:14.873859 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 16 13:12:14.873879 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:12:14.873894 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:12:14.873905 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 13:12:14.873916 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 13:12:14.873926 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 13:12:14.874067 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 16 13:12:14.874083 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 13:12:14.874243 kernel: rtc_cmos 00:04: registered as rtc0
Dec 16 13:12:14.874384 kernel: rtc_cmos 00:04: setting system clock to 2025-12-16T13:12:14 UTC (1765890734)
Dec 16 13:12:14.874555 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 16 13:12:14.874572 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 16 13:12:14.874587 kernel: efifb: probing for efifb
Dec 16 13:12:14.874598 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Dec 16 13:12:14.874609 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Dec 16 13:12:14.874620 kernel: efifb: scrolling: redraw
Dec 16 13:12:14.874630 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 13:12:14.874641 kernel: Console: switching to colour frame buffer device 160x50
Dec 16 13:12:14.874655 kernel: fb0: EFI VGA frame buffer device
Dec 16 13:12:14.874666 kernel: pstore: Using crash dump compression: deflate
Dec 16 13:12:14.874677 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 13:12:14.874688 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:12:14.874699 kernel: Segment Routing with IPv6
Dec 16 13:12:14.874710 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:12:14.874721 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:12:14.874732 kernel: Key type dns_resolver registered
Dec 16 13:12:14.874742 kernel: IPI shorthand broadcast: enabled
Dec 16 13:12:14.874755 kernel: sched_clock: Marking stable (2746002476, 287909292)->(3198500122, -164588354)
Dec 16 13:12:14.874766 kernel: registered taskstats version 1
Dec 16 13:12:14.874777 kernel: Loading compiled-in X.509 certificates
Dec 16 13:12:14.874788 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:12:14.874799 kernel: Demotion targets for Node 0: null
Dec 16 13:12:14.874810 kernel: Key type .fscrypt registered Dec 16
13:12:14.874821 kernel: Key type fscrypt-provisioning registered Dec 16 13:12:14.874831 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 13:12:14.874843 kernel: ima: Allocated hash algorithm: sha1 Dec 16 13:12:14.874856 kernel: ima: No architecture policies found Dec 16 13:12:14.874867 kernel: clk: Disabling unused clocks Dec 16 13:12:14.874878 kernel: Warning: unable to open an initial console. Dec 16 13:12:14.874889 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 16 13:12:14.874900 kernel: Write protecting the kernel read-only data: 40960k Dec 16 13:12:14.874911 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 16 13:12:14.874921 kernel: Run /init as init process Dec 16 13:12:14.874932 kernel: with arguments: Dec 16 13:12:14.874942 kernel: /init Dec 16 13:12:14.874955 kernel: with environment: Dec 16 13:12:14.874965 kernel: HOME=/ Dec 16 13:12:14.874976 kernel: TERM=linux Dec 16 13:12:14.874989 systemd[1]: Successfully made /usr/ read-only. Dec 16 13:12:14.875004 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:12:14.875016 systemd[1]: Detected virtualization kvm. Dec 16 13:12:14.875028 systemd[1]: Detected architecture x86-64. Dec 16 13:12:14.875039 systemd[1]: Running in initrd. Dec 16 13:12:14.875053 systemd[1]: No hostname configured, using default hostname. Dec 16 13:12:14.875066 systemd[1]: Hostname set to . Dec 16 13:12:14.875077 systemd[1]: Initializing machine ID from VM UUID. Dec 16 13:12:14.875088 systemd[1]: Queued start job for default target initrd.target. Dec 16 13:12:14.875099 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 16 13:12:14.875111 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:12:14.875131 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 13:12:14.875143 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:12:14.875158 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 13:12:14.875168 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 13:12:14.875179 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 13:12:14.875188 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 13:12:14.875196 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:12:14.875205 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:12:14.875214 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:12:14.875224 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:12:14.875233 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:12:14.875242 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:12:14.875250 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:12:14.875259 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:12:14.875268 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:12:14.875276 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:12:14.875285 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 16 13:12:14.875296 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:12:14.875305 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:12:14.875314 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:12:14.875324 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 13:12:14.875333 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:12:14.875341 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:12:14.875350 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:12:14.875359 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:12:14.875368 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:12:14.875378 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:12:14.875387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:12:14.875396 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 13:12:14.875405 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:12:14.875456 systemd-journald[201]: Collecting audit messages is disabled. Dec 16 13:12:14.875486 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 13:12:14.875497 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:12:14.875506 systemd-journald[201]: Journal started Dec 16 13:12:14.875532 systemd-journald[201]: Runtime Journal (/run/log/journal/a98062577f7541ad837451e9021c4cb0) is 6M, max 48.1M, 42.1M free. 
Dec 16 13:12:14.863619 systemd-modules-load[204]: Inserted module 'overlay' Dec 16 13:12:14.883670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:12:14.887461 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:12:14.892330 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 13:12:14.899360 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 13:12:14.899394 kernel: Bridge firewalling registered Dec 16 13:12:14.899286 systemd-modules-load[204]: Inserted module 'br_netfilter' Dec 16 13:12:14.901275 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:12:14.912622 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:12:14.912931 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:12:14.915068 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:12:14.916075 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:12:14.933388 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 13:12:14.934689 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:12:14.937349 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:12:14.939302 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:12:14.941420 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:12:14.956644 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 16 13:12:14.961828 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 13:12:14.987557 systemd-resolved[239]: Positive Trust Anchors: Dec 16 13:12:14.987572 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:12:14.987608 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:12:14.990363 systemd-resolved[239]: Defaulting to hostname 'linux'. Dec 16 13:12:14.991535 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:12:15.003301 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:12:15.011868 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:12:15.110477 kernel: SCSI subsystem initialized Dec 16 13:12:15.120468 kernel: Loading iSCSI transport class v2.0-870. 
Dec 16 13:12:15.130467 kernel: iscsi: registered transport (tcp) Dec 16 13:12:15.151471 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:12:15.151510 kernel: QLogic iSCSI HBA Driver Dec 16 13:12:15.172617 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:12:15.201912 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:12:15.203697 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:12:15.254220 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:12:15.255733 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 13:12:15.321473 kernel: raid6: avx2x4 gen() 30424 MB/s Dec 16 13:12:15.338460 kernel: raid6: avx2x2 gen() 31152 MB/s Dec 16 13:12:15.356302 kernel: raid6: avx2x1 gen() 25111 MB/s Dec 16 13:12:15.356325 kernel: raid6: using algorithm avx2x2 gen() 31152 MB/s Dec 16 13:12:15.374317 kernel: raid6: .... xor() 18557 MB/s, rmw enabled Dec 16 13:12:15.374398 kernel: raid6: using avx2x2 recovery algorithm Dec 16 13:12:15.395465 kernel: xor: automatically using best checksumming function avx Dec 16 13:12:15.556482 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:12:15.563724 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:12:15.565456 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:12:15.610098 systemd-udevd[453]: Using default interface naming scheme 'v255'. Dec 16 13:12:15.616369 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:12:15.622385 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:12:15.649040 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Dec 16 13:12:15.681828 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 16 13:12:15.685781 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:12:15.774034 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:12:15.779554 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 13:12:15.823557 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:12:15.825458 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 16 13:12:15.837638 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 16 13:12:15.845397 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 13:12:15.845427 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 13:12:15.845451 kernel: GPT:9289727 != 19775487 Dec 16 13:12:15.850259 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 13:12:15.850303 kernel: GPT:9289727 != 19775487 Dec 16 13:12:15.850314 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 13:12:15.850324 kernel: libata version 3.00 loaded. Dec 16 13:12:15.850335 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:12:15.861461 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 13:12:15.861688 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 13:12:15.861848 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:12:15.862310 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:12:15.869472 kernel: AES CTR mode by8 optimization enabled Dec 16 13:12:15.869841 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:12:15.875983 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 16 13:12:15.878560 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 13:12:15.878817 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 13:12:15.879060 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 13:12:15.881044 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:12:15.888469 kernel: scsi host0: ahci Dec 16 13:12:15.892485 kernel: scsi host1: ahci Dec 16 13:12:15.895122 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:12:15.895291 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:12:15.907360 kernel: scsi host2: ahci Dec 16 13:12:15.900922 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:12:15.909679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:12:15.919490 kernel: scsi host3: ahci Dec 16 13:12:15.922459 kernel: scsi host4: ahci Dec 16 13:12:15.922710 kernel: scsi host5: ahci Dec 16 13:12:15.924888 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Dec 16 13:12:15.924915 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Dec 16 13:12:15.928486 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Dec 16 13:12:15.928528 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Dec 16 13:12:15.931240 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Dec 16 13:12:15.931261 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Dec 16 13:12:15.932302 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Dec 16 13:12:15.948275 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 16 13:12:15.948348 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 13:12:15.956991 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 13:12:15.961641 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:12:15.979896 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 13:12:15.983912 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:12:16.016015 disk-uuid[617]: Primary Header is updated. Dec 16 13:12:16.016015 disk-uuid[617]: Secondary Entries is updated. Dec 16 13:12:16.016015 disk-uuid[617]: Secondary Header is updated. Dec 16 13:12:16.021458 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:12:16.026480 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:12:16.246071 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 13:12:16.246159 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 13:12:16.246470 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 16 13:12:16.248660 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 13:12:16.249460 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 13:12:16.250472 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 13:12:16.251872 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 16 13:12:16.251885 kernel: ata3.00: applying bridge limits Dec 16 13:12:16.253462 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 13:12:16.254469 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 13:12:16.255873 kernel: ata3.00: configured for UDMA/100 Dec 16 13:12:16.256482 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 
ANSI: 5 Dec 16 13:12:16.324427 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 16 13:12:16.324759 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 13:12:16.345583 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 16 13:12:16.802175 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 13:12:16.803040 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:12:16.807765 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:12:16.811649 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:12:16.816537 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:12:16.846196 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:12:17.090581 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:12:17.090651 disk-uuid[618]: The operation has completed successfully. Dec 16 13:12:17.118268 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:12:17.118392 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:12:17.152789 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 13:12:17.166833 sh[646]: Success Dec 16 13:12:17.185174 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 13:12:17.185238 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:12:17.185263 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:12:17.196463 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 16 13:12:17.222847 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:12:17.224833 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Dec 16 13:12:17.235045 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 13:12:17.244259 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (659) Dec 16 13:12:17.244285 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 13:12:17.244296 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:12:17.248044 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:12:17.248068 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:12:17.249317 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 13:12:17.249880 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:12:17.253787 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:12:17.254568 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 13:12:17.261526 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:12:17.285467 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (692) Dec 16 13:12:17.288824 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:12:17.288846 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:12:17.292204 kernel: BTRFS info (device vda6): turning on async discard Dec 16 13:12:17.292226 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 13:12:17.297452 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:12:17.298299 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Dec 16 13:12:17.301707 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 13:12:17.386148 ignition[737]: Ignition 2.22.0 Dec 16 13:12:17.386160 ignition[737]: Stage: fetch-offline Dec 16 13:12:17.393207 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:12:17.386196 ignition[737]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:12:17.386204 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 13:12:17.386290 ignition[737]: parsed url from cmdline: "" Dec 16 13:12:17.386294 ignition[737]: no config URL provided Dec 16 13:12:17.386299 ignition[737]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:12:17.386307 ignition[737]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:12:17.386329 ignition[737]: op(1): [started] loading QEMU firmware config module Dec 16 13:12:17.386341 ignition[737]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 16 13:12:17.400460 ignition[737]: op(1): [finished] loading QEMU firmware config module Dec 16 13:12:17.450949 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:12:17.503401 systemd-networkd[837]: lo: Link UP Dec 16 13:12:17.503412 systemd-networkd[837]: lo: Gained carrier Dec 16 13:12:17.504943 systemd-networkd[837]: Enumeration completed Dec 16 13:12:17.505041 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:12:17.505316 systemd-networkd[837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:12:17.505320 systemd-networkd[837]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 13:12:17.505723 systemd-networkd[837]: eth0: Link UP Dec 16 13:12:17.506162 systemd-networkd[837]: eth0: Gained carrier Dec 16 13:12:17.506171 systemd-networkd[837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:12:17.509869 systemd[1]: Reached target network.target - Network. Dec 16 13:12:17.538525 systemd-networkd[837]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 13:12:17.560109 ignition[737]: parsing config with SHA512: e3ebae032e5b853b11183064d12b20b58a153c0cd02ab65e37f23d8925afecbc1e3edb8fbe22d5b061efeadd686566216a202aed62860b5093edeac55e339aaf Dec 16 13:12:17.565745 unknown[737]: fetched base config from "system" Dec 16 13:12:17.565921 unknown[737]: fetched user config from "qemu" Dec 16 13:12:17.566298 ignition[737]: fetch-offline: fetch-offline passed Dec 16 13:12:17.566369 ignition[737]: Ignition finished successfully Dec 16 13:12:17.572603 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:12:17.576483 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 16 13:12:17.579756 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 13:12:17.621959 ignition[842]: Ignition 2.22.0 Dec 16 13:12:17.621972 ignition[842]: Stage: kargs Dec 16 13:12:17.622116 ignition[842]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:12:17.622126 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 13:12:17.622848 ignition[842]: kargs: kargs passed Dec 16 13:12:17.622890 ignition[842]: Ignition finished successfully Dec 16 13:12:17.629335 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 13:12:17.632530 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 16 13:12:17.671208 ignition[850]: Ignition 2.22.0 Dec 16 13:12:17.671223 ignition[850]: Stage: disks Dec 16 13:12:17.671389 ignition[850]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:12:17.671402 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 13:12:17.672143 ignition[850]: disks: disks passed Dec 16 13:12:17.672189 ignition[850]: Ignition finished successfully Dec 16 13:12:17.677924 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 13:12:17.679043 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:12:17.685140 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:12:17.685251 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:12:17.688992 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:12:17.689799 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:12:17.698626 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 13:12:17.734246 systemd-resolved[239]: Detected conflict on linux IN A 10.0.0.147 Dec 16 13:12:17.734259 systemd-resolved[239]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. Dec 16 13:12:17.735587 systemd-fsck[860]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 16 13:12:17.896635 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:12:17.905509 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 13:12:18.021466 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 16 13:12:18.022512 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:12:18.023164 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:12:18.028010 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 16 13:12:18.031188 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:12:18.033873 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 13:12:18.033914 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:12:18.033935 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:12:18.093574 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 13:12:18.097755 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 13:12:18.107836 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (868) Dec 16 13:12:18.107856 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:12:18.107867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:12:18.107877 kernel: BTRFS info (device vda6): turning on async discard Dec 16 13:12:18.107887 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 13:12:18.110092 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:12:18.140934 initrd-setup-root[892]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 13:12:18.146520 initrd-setup-root[899]: cut: /sysroot/etc/group: No such file or directory Dec 16 13:12:18.150418 initrd-setup-root[906]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 13:12:18.155626 initrd-setup-root[913]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 13:12:18.237850 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 13:12:18.240636 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:12:18.244429 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Dec 16 13:12:18.258839 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:12:18.261350 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:12:18.271221 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 13:12:18.286176 ignition[982]: INFO : Ignition 2.22.0
Dec 16 13:12:18.286176 ignition[982]: INFO : Stage: mount
Dec 16 13:12:18.288702 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:12:18.288702 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:12:18.288702 ignition[982]: INFO : mount: mount passed
Dec 16 13:12:18.288702 ignition[982]: INFO : Ignition finished successfully
Dec 16 13:12:18.291431 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:12:18.295765 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:12:18.325407 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:12:18.350583 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (994)
Dec 16 13:12:18.350616 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:12:18.350632 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:12:18.355414 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 13:12:18.355434 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 13:12:18.357129 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:12:18.392124 ignition[1011]: INFO : Ignition 2.22.0
Dec 16 13:12:18.392124 ignition[1011]: INFO : Stage: files
Dec 16 13:12:18.394773 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:12:18.394773 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:12:18.394773 ignition[1011]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:12:18.394773 ignition[1011]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:12:18.394773 ignition[1011]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:12:18.404682 ignition[1011]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:12:18.406992 ignition[1011]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:12:18.409374 unknown[1011]: wrote ssh authorized keys file for user: core
Dec 16 13:12:18.411059 ignition[1011]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:12:18.414139 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:12:18.417400 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 13:12:18.455928 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:12:18.568253 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:12:18.568253 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:12:18.574605 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 16 13:12:18.800458 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 16 13:12:18.951221 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:12:18.951221 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:12:18.958007 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:12:18.958007 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:12:18.958007 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:12:18.958007 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:12:18.958007 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:12:18.958007 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:12:18.958007 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:12:18.958007 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:12:18.958007 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:12:18.958007 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:12:18.988240 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:12:18.988240 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:12:18.988240 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Dec 16 13:12:19.141639 systemd-networkd[837]: eth0: Gained IPv6LL
Dec 16 13:12:19.340965 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 16 13:12:19.854134 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:12:19.854134 ignition[1011]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 16 13:12:19.859900 ignition[1011]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:12:20.121576 ignition[1011]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:12:20.121576 ignition[1011]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 16 13:12:20.121576 ignition[1011]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 16 13:12:20.121576 ignition[1011]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 13:12:20.131862 ignition[1011]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 13:12:20.131862 ignition[1011]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 16 13:12:20.131862 ignition[1011]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 16 13:12:20.156256 ignition[1011]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 13:12:20.160424 ignition[1011]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 13:12:20.162964 ignition[1011]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 16 13:12:20.162964 ignition[1011]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:12:20.162964 ignition[1011]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:12:20.162964 ignition[1011]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:12:20.162964 ignition[1011]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:12:20.162964 ignition[1011]: INFO : files: files passed
Dec 16 13:12:20.162964 ignition[1011]: INFO : Ignition finished successfully
Dec 16 13:12:20.170710 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:12:20.178603 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:12:20.183069 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:12:20.201455 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:12:20.201610 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:12:20.207894 initrd-setup-root-after-ignition[1041]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 16 13:12:20.210235 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:12:20.210235 initrd-setup-root-after-ignition[1043]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:12:20.215606 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:12:20.217436 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:12:20.220322 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:12:20.223045 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:12:20.273840 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:12:20.274007 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:12:20.277780 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:12:20.282854 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:12:20.284633 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:12:20.286369 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:12:20.327819 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:12:20.329228 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:12:20.355685 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:12:20.355868 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:12:20.361380 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:12:20.363243 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:12:20.363363 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:12:20.369516 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:12:20.373006 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:12:20.376098 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:12:20.379245 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:12:20.382798 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:12:20.384643 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:12:20.388091 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:12:20.393058 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:12:20.396715 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:12:20.400357 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:12:20.403619 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:12:20.405236 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:12:20.405392 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:12:20.412498 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:12:20.412675 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:12:20.416021 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:12:20.419779 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:12:20.421379 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:12:20.421554 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:12:20.428741 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:12:20.428873 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:12:20.430609 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:12:20.433887 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:12:20.440539 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:12:20.442674 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:12:20.446606 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:12:20.449614 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:12:20.449714 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:12:20.452702 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:12:20.452785 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:12:20.454195 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:12:20.454309 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:12:20.459279 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:12:20.459381 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:12:20.465863 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:12:20.472655 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:12:20.474198 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:12:20.474322 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:12:20.477187 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:12:20.477288 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:12:20.491709 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:12:20.491846 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:12:20.503770 ignition[1067]: INFO : Ignition 2.22.0
Dec 16 13:12:20.503770 ignition[1067]: INFO : Stage: umount
Dec 16 13:12:20.506624 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:12:20.506624 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:12:20.506624 ignition[1067]: INFO : umount: umount passed
Dec 16 13:12:20.506624 ignition[1067]: INFO : Ignition finished successfully
Dec 16 13:12:20.507640 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:12:20.507774 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:12:20.508966 systemd[1]: Stopped target network.target - Network.
Dec 16 13:12:20.514730 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:12:20.514844 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:12:20.518029 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:12:20.518108 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:12:20.519585 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:12:20.519672 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:12:20.525057 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:12:20.525147 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:12:20.526700 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:12:20.529905 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:12:20.534623 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:12:20.547653 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:12:20.547789 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:12:20.553544 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:12:20.554060 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:12:20.554279 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:12:20.563579 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:12:20.564258 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:12:20.566298 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:12:20.566385 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:12:20.569928 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:12:20.575232 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:12:20.577057 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:12:20.584375 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:12:20.584476 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:12:20.589334 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:12:20.589387 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:12:20.645057 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:12:20.645114 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:12:20.647425 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:12:20.653058 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:12:20.653158 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:12:20.673155 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:12:20.673282 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:12:20.682340 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:12:20.682578 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:12:20.684219 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:12:20.684273 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:12:20.687917 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:12:20.687961 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:12:20.691171 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:12:20.691229 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:12:20.699219 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:12:20.699285 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:12:20.703910 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:12:20.703962 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:12:20.709665 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:12:20.710539 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:12:20.710592 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:12:20.718613 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:12:20.718658 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:12:20.751212 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 13:12:20.751257 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:12:20.757007 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:12:20.757054 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:12:20.759368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:12:20.759413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:12:20.768355 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 13:12:20.768412 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Dec 16 13:12:20.768473 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 13:12:20.768521 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:12:20.771407 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:12:20.771528 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:12:21.169870 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:12:21.170025 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:12:21.173378 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:12:21.176259 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:12:21.176315 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:12:21.180587 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:12:21.200302 systemd[1]: Switching root.
Dec 16 13:12:21.243291 systemd-journald[201]: Journal stopped
Dec 16 13:12:23.200401 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:12:23.200499 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:12:23.200519 kernel: SELinux: policy capability open_perms=1
Dec 16 13:12:23.200535 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:12:23.200555 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:12:23.200572 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:12:23.200590 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:12:23.200610 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:12:23.200626 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:12:23.200648 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:12:23.200677 kernel: audit: type=1403 audit(1765890742.167:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:12:23.200694 systemd[1]: Successfully loaded SELinux policy in 66.907ms.
Dec 16 13:12:23.200713 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.245ms.
Dec 16 13:12:23.200730 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:12:23.200749 systemd[1]: Detected virtualization kvm.
Dec 16 13:12:23.200766 systemd[1]: Detected architecture x86-64.
Dec 16 13:12:23.200781 systemd[1]: Detected first boot.
Dec 16 13:12:23.200798 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:12:23.200819 zram_generator::config[1114]: No configuration found.
Dec 16 13:12:23.200836 kernel: Guest personality initialized and is inactive
Dec 16 13:12:23.200850 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 13:12:23.200864 kernel: Initialized host personality
Dec 16 13:12:23.200879 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:12:23.200907 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:12:23.200927 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:12:23.200944 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:12:23.200959 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:12:23.200974 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:12:23.200990 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:12:23.201005 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:12:23.201021 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:12:23.201041 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:12:23.201063 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:12:23.201078 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:12:23.201094 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:12:23.201109 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:12:23.201125 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:12:23.201140 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:12:23.201156 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:12:23.201179 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:12:23.201203 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:12:23.201220 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:12:23.201236 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:12:23.201252 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:12:23.201266 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:12:23.201282 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:12:23.201297 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:12:23.201312 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:12:23.201331 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:12:23.201346 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:12:23.201363 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:12:23.201378 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:12:23.201396 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:12:23.201412 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:12:23.201428 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:12:23.201509 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:12:23.201527 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:12:23.201547 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:12:23.201563 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:12:23.201578 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:12:23.201594 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:12:23.201610 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:12:23.201626 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:12:23.201642 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:12:23.201657 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:12:23.201672 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:12:23.201691 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:12:23.201706 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:12:23.201722 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:12:23.201744 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:12:23.201769 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:12:23.201786 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:12:23.201809 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:12:23.201833 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:12:23.201872 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:12:23.201888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:12:23.201916 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:12:23.201932 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:12:23.201949 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:12:23.201965 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:12:23.201981 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:12:23.201997 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:12:23.202013 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:12:23.202033 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:12:23.202050 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:12:23.202066 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:12:23.202081 kernel: loop: module loaded
Dec 16 13:12:23.202097 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:12:23.202139 systemd-journald[1178]: Collecting audit messages is disabled.
Dec 16 13:12:23.202168 systemd-journald[1178]: Journal started
Dec 16 13:12:23.202200 systemd-journald[1178]: Runtime Journal (/run/log/journal/a98062577f7541ad837451e9021c4cb0) is 6M, max 48.1M, 42.1M free.
Dec 16 13:12:23.231798 kernel: fuse: init (API version 7.41)
Dec 16 13:12:23.231912 kernel: ACPI: bus type drm_connector registered
Dec 16 13:12:22.864898 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:12:22.887336 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 16 13:12:22.887790 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:12:23.236470 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 13:12:23.247470 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 13:12:23.257727 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:12:23.261051 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 13:12:23.261128 systemd[1]: Stopped verity-setup.service. Dec 16 13:12:23.266470 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:12:23.272287 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:12:23.273087 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 13:12:23.274970 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 13:12:23.276953 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 13:12:23.278729 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 13:12:23.280708 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 13:12:23.282711 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 13:12:23.284679 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 13:12:23.286985 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:12:23.289407 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 13:12:23.289648 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 13:12:23.291943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:12:23.292143 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:12:23.294366 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 16 13:12:23.294827 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:12:23.296939 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:12:23.297149 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:12:23.299536 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 13:12:23.299738 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 13:12:23.301968 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:12:23.302177 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:12:23.304345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:12:23.306779 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:12:23.309204 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 13:12:23.311920 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 13:12:23.325283 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:12:23.328579 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 13:12:23.331598 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 13:12:23.333514 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 13:12:23.333551 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:12:23.336318 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 13:12:23.341436 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Dec 16 13:12:23.343322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:12:23.345758 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 13:12:23.349972 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 13:12:23.352138 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:12:23.353326 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 13:12:23.355605 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:12:23.359571 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:12:23.367598 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 13:12:23.370596 systemd-journald[1178]: Time spent on flushing to /var/log/journal/a98062577f7541ad837451e9021c4cb0 is 35.647ms for 1079 entries. Dec 16 13:12:23.370596 systemd-journald[1178]: System Journal (/var/log/journal/a98062577f7541ad837451e9021c4cb0) is 8M, max 195.6M, 187.6M free. Dec 16 13:12:23.428691 systemd-journald[1178]: Received client request to flush runtime journal. Dec 16 13:12:23.428743 kernel: loop0: detected capacity change from 0 to 128560 Dec 16 13:12:23.428768 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 13:12:23.373588 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:12:23.378269 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:12:23.381324 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Dec 16 13:12:23.383529 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 13:12:23.392214 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 13:12:23.399340 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 13:12:23.404580 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 13:12:23.413226 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:12:23.413457 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Dec 16 13:12:23.413476 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Dec 16 13:12:23.422567 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:12:23.427325 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 13:12:23.433042 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 13:12:23.442361 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 13:12:23.443463 kernel: loop1: detected capacity change from 0 to 219144 Dec 16 13:12:23.444246 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 13:12:23.468298 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 13:12:23.470544 kernel: loop2: detected capacity change from 0 to 110984 Dec 16 13:12:23.473880 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:12:23.494998 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Dec 16 13:12:23.495343 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Dec 16 13:12:23.497492 kernel: loop3: detected capacity change from 0 to 128560 Dec 16 13:12:23.499608 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 16 13:12:23.511463 kernel: loop4: detected capacity change from 0 to 219144 Dec 16 13:12:23.520465 kernel: loop5: detected capacity change from 0 to 110984 Dec 16 13:12:23.528053 (sd-merge)[1260]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 16 13:12:23.529463 (sd-merge)[1260]: Merged extensions into '/usr'. Dec 16 13:12:23.534573 systemd[1]: Reload requested from client PID 1233 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 13:12:23.534588 systemd[1]: Reloading... Dec 16 13:12:23.583473 zram_generator::config[1286]: No configuration found. Dec 16 13:12:23.685873 ldconfig[1228]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 13:12:23.780900 systemd[1]: Reloading finished in 245 ms. Dec 16 13:12:23.807854 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 13:12:23.810272 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 13:12:23.826851 systemd[1]: Starting ensure-sysext.service... Dec 16 13:12:23.829248 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:12:23.839755 systemd[1]: Reload requested from client PID 1324 ('systemctl') (unit ensure-sysext.service)... Dec 16 13:12:23.839775 systemd[1]: Reloading... Dec 16 13:12:23.845792 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 13:12:23.846299 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 13:12:23.846736 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 13:12:23.847083 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Dec 16 13:12:23.848021 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 13:12:23.848333 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Dec 16 13:12:23.848485 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Dec 16 13:12:23.852692 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:12:23.852765 systemd-tmpfiles[1326]: Skipping /boot Dec 16 13:12:23.863155 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:12:23.863295 systemd-tmpfiles[1326]: Skipping /boot Dec 16 13:12:23.911485 zram_generator::config[1356]: No configuration found. Dec 16 13:12:24.072618 systemd[1]: Reloading finished in 232 ms. Dec 16 13:12:24.094997 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 13:12:24.118972 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:12:24.128905 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:12:24.131856 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 13:12:24.141429 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 13:12:24.145766 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:12:24.150615 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:12:24.153698 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 13:12:24.158649 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:12:24.158831 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 16 13:12:24.162722 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:12:24.165895 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:12:24.169621 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:12:24.171414 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:12:24.171529 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:12:24.173796 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 13:12:24.176911 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:12:24.178296 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 13:12:24.182121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:12:24.182325 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:12:24.185142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:12:24.185346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:12:24.190024 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:12:24.190430 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:12:24.199070 systemd-udevd[1402]: Using default interface naming scheme 'v255'. Dec 16 13:12:24.199641 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 16 13:12:24.200068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:12:24.201690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:12:24.206786 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:12:24.215237 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:12:24.217363 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:12:24.217539 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:12:24.219539 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 13:12:24.221216 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:12:24.223226 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 13:12:24.226990 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:12:24.227194 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:12:24.229839 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:12:24.230113 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:12:24.232898 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 13:12:24.235137 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:12:24.237985 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 16 13:12:24.238200 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:12:24.242721 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 13:12:24.252359 augenrules[1429]: No rules Dec 16 13:12:24.253341 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:12:24.254130 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:12:24.256417 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 13:12:24.281662 systemd[1]: Finished ensure-sysext.service. Dec 16 13:12:24.285111 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:12:24.286971 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:12:24.288553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:12:24.291548 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:12:24.301650 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:12:24.305221 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:12:24.308125 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:12:24.309849 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:12:24.309895 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:12:24.312165 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:12:24.319875 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Dec 16 13:12:24.321887 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 13:12:24.321914 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:12:24.322547 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:12:24.322762 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:12:24.325241 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:12:24.325509 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:12:24.327960 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:12:24.328167 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:12:24.330296 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:12:24.330653 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:12:24.341202 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:12:24.343084 augenrules[1474]: /sbin/augenrules: No change Dec 16 13:12:24.343025 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:12:24.345222 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 13:12:24.356487 augenrules[1502]: No rules Dec 16 13:12:24.356355 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:12:24.357099 systemd-resolved[1396]: Positive Trust Anchors: Dec 16 13:12:24.357604 systemd-resolved[1396]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:12:24.357695 systemd-resolved[1396]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:12:24.357765 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:12:24.363245 systemd-resolved[1396]: Defaulting to hostname 'linux'. Dec 16 13:12:24.368158 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:12:24.371430 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:12:24.412769 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 13:12:24.417469 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 13:12:24.418482 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 13:12:24.444830 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 13:12:24.445253 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Dec 16 13:12:24.452467 kernel: ACPI: button: Power Button [PWRF] Dec 16 13:12:24.462547 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 16 13:12:24.466048 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 13:12:24.466255 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 13:12:24.479979 systemd-networkd[1483]: lo: Link UP Dec 16 13:12:24.480284 systemd-networkd[1483]: lo: Gained carrier Dec 16 13:12:24.482871 systemd-networkd[1483]: Enumeration completed Dec 16 13:12:24.482996 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:12:24.483289 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:12:24.483294 systemd-networkd[1483]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:12:24.484542 systemd-networkd[1483]: eth0: Link UP Dec 16 13:12:24.485262 systemd-networkd[1483]: eth0: Gained carrier Dec 16 13:12:24.485338 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:12:24.485361 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 13:12:24.487626 systemd[1]: Reached target network.target - Network. Dec 16 13:12:24.489186 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:12:24.491086 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 13:12:24.493508 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 13:12:24.495797 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 13:12:24.498358 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Dec 16 13:12:24.500524 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 13:12:24.500537 systemd-networkd[1483]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 13:12:24.500571 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:12:24.502087 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 13:12:24.503058 systemd-timesyncd[1486]: Network configuration changed, trying to establish connection. Dec 16 13:12:26.182225 systemd-resolved[1396]: Clock change detected. Flushing caches. Dec 16 13:12:26.182359 systemd-timesyncd[1486]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 16 13:12:26.182413 systemd-timesyncd[1486]: Initial clock synchronization to Tue 2025-12-16 13:12:26.182187 UTC. Dec 16 13:12:26.182433 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 13:12:26.184318 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 13:12:26.186895 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:12:26.189688 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 13:12:26.193432 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 13:12:26.198329 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 13:12:26.200530 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 13:12:26.202581 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 13:12:26.209709 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 13:12:26.213417 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Dec 16 13:12:26.248300 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 13:12:26.252074 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 13:12:26.254544 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 13:12:26.266518 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:12:26.268584 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:12:26.270349 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:12:26.270485 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:12:26.280266 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 13:12:26.284039 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 13:12:26.287263 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 13:12:26.290713 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 13:12:26.342203 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 13:12:26.343899 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 13:12:26.347324 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 13:12:26.351768 jq[1545]: false Dec 16 13:12:26.352106 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Dec 16 13:12:26.353222 kernel: kvm_amd: TSC scaling supported Dec 16 13:12:26.353263 kernel: kvm_amd: Nested Virtualization enabled Dec 16 13:12:26.353276 kernel: kvm_amd: Nested Paging enabled Dec 16 13:12:26.353288 kernel: kvm_amd: LBR virtualization supported Dec 16 13:12:26.353300 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 16 13:12:26.358177 kernel: kvm_amd: Virtual GIF supported Dec 16 13:12:26.361497 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 13:12:26.365708 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 13:12:26.368934 extend-filesystems[1546]: Found /dev/vda6 Dec 16 13:12:26.370267 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 13:12:26.372119 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Refreshing passwd entry cache Dec 16 13:12:26.372140 oslogin_cache_refresh[1547]: Refreshing passwd entry cache Dec 16 13:12:26.378474 extend-filesystems[1546]: Found /dev/vda9 Dec 16 13:12:26.380865 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Failure getting users, quitting Dec 16 13:12:26.380865 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:12:26.380865 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Refreshing group entry cache Dec 16 13:12:26.380223 oslogin_cache_refresh[1547]: Failure getting users, quitting Dec 16 13:12:26.380255 oslogin_cache_refresh[1547]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:12:26.380309 oslogin_cache_refresh[1547]: Refreshing group entry cache Dec 16 13:12:26.381399 extend-filesystems[1546]: Checking size of /dev/vda9 Dec 16 13:12:26.384277 systemd[1]: Starting systemd-logind.service - User Login Management... 
Dec 16 13:12:26.386733 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 13:12:26.386744 oslogin_cache_refresh[1547]: Failure getting groups, quitting Dec 16 13:12:26.388141 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Failure getting groups, quitting Dec 16 13:12:26.388141 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:12:26.386754 oslogin_cache_refresh[1547]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:12:26.388420 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 13:12:26.392089 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:12:26.394976 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:12:26.417663 jq[1566]: true Dec 16 13:12:26.418329 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 13:12:26.426370 update_engine[1564]: I20251216 13:12:26.426067 1564 main.cc:92] Flatcar Update Engine starting Dec 16 13:12:26.428024 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 13:12:26.428882 kernel: EDAC MC: Ver: 3.0.0 Dec 16 13:12:26.431347 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:12:26.432358 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:12:26.432937 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:12:26.438377 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 13:12:26.441059 systemd[1]: motdgen.service: Deactivated successfully. 
Dec 16 13:12:26.441774 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 13:12:26.445245 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:12:26.445570 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:12:26.451111 extend-filesystems[1546]: Resized partition /dev/vda9 Dec 16 13:12:26.453598 extend-filesystems[1578]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 13:12:26.472562 jq[1575]: true Dec 16 13:12:26.473745 (ntainerd)[1576]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 13:12:26.474015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:12:26.561857 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 16 13:12:26.590667 systemd-logind[1561]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 13:12:26.591068 systemd-logind[1561]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 13:12:26.592191 systemd-logind[1561]: New seat seat0. Dec 16 13:12:26.593621 dbus-daemon[1543]: [system] SELinux support is enabled Dec 16 13:12:26.593807 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 13:12:26.597424 update_engine[1564]: I20251216 13:12:26.596846 1564 update_check_scheduler.cc:74] Next update check in 3m49s Dec 16 13:12:26.599341 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:12:26.599366 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Dec 16 13:12:26.600266 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:12:26.600285 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:12:26.601059 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:12:26.601976 tar[1574]: linux-amd64/LICENSE Dec 16 13:12:26.602221 tar[1574]: linux-amd64/helm Dec 16 13:12:26.603652 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:12:26.606855 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 13:12:26.640848 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 16 13:12:27.061580 containerd[1576]: time="2025-12-16T13:12:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:12:26.658529 locksmithd[1610]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:12:26.676494 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:12:27.062701 containerd[1576]: time="2025-12-16T13:12:27.062594613Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 13:12:27.064465 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:12:27.065840 extend-filesystems[1578]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 13:12:27.065840 extend-filesystems[1578]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 13:12:27.065840 extend-filesystems[1578]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Dec 16 13:12:27.075944 extend-filesystems[1546]: Resized filesystem in /dev/vda9
Dec 16 13:12:27.077403 containerd[1576]: time="2025-12-16T13:12:27.072545658Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.685µs"
Dec 16 13:12:27.077403 containerd[1576]: time="2025-12-16T13:12:27.072574372Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 13:12:27.077403 containerd[1576]: time="2025-12-16T13:12:27.072590202Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 13:12:27.077403 containerd[1576]: time="2025-12-16T13:12:27.072739672Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 13:12:27.077403 containerd[1576]: time="2025-12-16T13:12:27.072751955Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 13:12:27.077403 containerd[1576]: time="2025-12-16T13:12:27.072772083Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:12:27.077403 containerd[1576]: time="2025-12-16T13:12:27.072858525Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:12:27.077403 containerd[1576]: time="2025-12-16T13:12:27.072868895Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:12:27.077403 containerd[1576]: time="2025-12-16T13:12:27.073101371Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:12:27.077403 containerd[1576]: time="2025-12-16T13:12:27.073114345Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:12:27.077403 containerd[1576]: time="2025-12-16T13:12:27.073123492Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:12:27.077403 containerd[1576]: time="2025-12-16T13:12:27.073131247Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 13:12:27.066960 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 13:12:27.077699 bash[1607]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:12:27.077880 containerd[1576]: time="2025-12-16T13:12:27.073217308Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 13:12:27.077880 containerd[1576]: time="2025-12-16T13:12:27.073449293Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:12:27.077880 containerd[1576]: time="2025-12-16T13:12:27.073479290Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:12:27.077880 containerd[1576]: time="2025-12-16T13:12:27.073489148Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 13:12:27.077880 containerd[1576]: time="2025-12-16T13:12:27.073532630Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 13:12:27.077880 containerd[1576]: time="2025-12-16T13:12:27.073835227Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 13:12:27.077880 containerd[1576]: time="2025-12-16T13:12:27.073899277Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 13:12:27.067228 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 13:12:27.078243 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 13:12:27.083473 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 16 13:12:27.084155 containerd[1576]: time="2025-12-16T13:12:27.084118426Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 13:12:27.084247 containerd[1576]: time="2025-12-16T13:12:27.084233441Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 13:12:27.084309 containerd[1576]: time="2025-12-16T13:12:27.084298012Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 13:12:27.084660 containerd[1576]: time="2025-12-16T13:12:27.084374045Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 13:12:27.084660 containerd[1576]: time="2025-12-16T13:12:27.084390035Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 13:12:27.084660 containerd[1576]: time="2025-12-16T13:12:27.084400765Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 13:12:27.084660 containerd[1576]: time="2025-12-16T13:12:27.084413188Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 13:12:27.084660 containerd[1576]: time="2025-12-16T13:12:27.084424981Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 13:12:27.084660 containerd[1576]: time="2025-12-16T13:12:27.084435019Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 13:12:27.084660 containerd[1576]: time="2025-12-16T13:12:27.084445048Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 13:12:27.084660 containerd[1576]: time="2025-12-16T13:12:27.084453674Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 13:12:27.084660 containerd[1576]: time="2025-12-16T13:12:27.084465176Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 13:12:27.084660 containerd[1576]: time="2025-12-16T13:12:27.084570684Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 13:12:27.084660 containerd[1576]: time="2025-12-16T13:12:27.084586804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 13:12:27.084660 containerd[1576]: time="2025-12-16T13:12:27.084600039Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 13:12:27.084660 containerd[1576]: time="2025-12-16T13:12:27.084614606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 13:12:27.084660 containerd[1576]: time="2025-12-16T13:12:27.084625036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 13:12:27.084939 containerd[1576]: time="2025-12-16T13:12:27.084634183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 13:12:27.085000 containerd[1576]: time="2025-12-16T13:12:27.084967528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 13:12:27.085000 containerd[1576]: time="2025-12-16T13:12:27.084989019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 13:12:27.085000 containerd[1576]: time="2025-12-16T13:12:27.084999558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 13:12:27.085113 containerd[1576]: time="2025-12-16T13:12:27.085009547Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 13:12:27.085113 containerd[1576]: time="2025-12-16T13:12:27.085021339Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 13:12:27.085113 containerd[1576]: time="2025-12-16T13:12:27.085061925Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 13:12:27.085113 containerd[1576]: time="2025-12-16T13:12:27.085078827Z" level=info msg="Start snapshots syncer"
Dec 16 13:12:27.085189 containerd[1576]: time="2025-12-16T13:12:27.085116197Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 13:12:27.085463 containerd[1576]: time="2025-12-16T13:12:27.085419646Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 16 13:12:27.085560 containerd[1576]: time="2025-12-16T13:12:27.085468638Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 16 13:12:27.085560 containerd[1576]: time="2025-12-16T13:12:27.085505838Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 16 13:12:27.085624 containerd[1576]: time="2025-12-16T13:12:27.085605645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 16 13:12:27.085646 containerd[1576]: time="2025-12-16T13:12:27.085626274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 16 13:12:27.085646 containerd[1576]: time="2025-12-16T13:12:27.085635872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 16 13:12:27.085646 containerd[1576]: time="2025-12-16T13:12:27.085644488Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 16 13:12:27.085705 containerd[1576]: time="2025-12-16T13:12:27.085655879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 16 13:12:27.085705 containerd[1576]: time="2025-12-16T13:12:27.085667651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 16 13:12:27.085705 containerd[1576]: time="2025-12-16T13:12:27.085677510Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 16 13:12:27.085705 containerd[1576]: time="2025-12-16T13:12:27.085696746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 16 13:12:27.085783 containerd[1576]: time="2025-12-16T13:12:27.085706585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 16 13:12:27.085783 containerd[1576]: time="2025-12-16T13:12:27.085722003Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 16 13:12:27.085783 containerd[1576]: time="2025-12-16T13:12:27.085748914Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 13:12:27.085783 containerd[1576]: time="2025-12-16T13:12:27.085759804Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 13:12:27.085783 containerd[1576]: time="2025-12-16T13:12:27.085767328Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 13:12:27.085783 containerd[1576]: time="2025-12-16T13:12:27.085775824Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 13:12:27.085783 containerd[1576]: time="2025-12-16T13:12:27.085783068Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 16 13:12:27.085975 containerd[1576]: time="2025-12-16T13:12:27.085792215Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 16 13:12:27.085975 containerd[1576]: time="2025-12-16T13:12:27.085807754Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 16 13:12:27.085975 containerd[1576]: time="2025-12-16T13:12:27.085838332Z" level=info msg="runtime interface created"
Dec 16 13:12:27.085975 containerd[1576]: time="2025-12-16T13:12:27.085844774Z" level=info msg="created NRI interface"
Dec 16 13:12:27.085975 containerd[1576]: time="2025-12-16T13:12:27.085851947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 16 13:12:27.085975 containerd[1576]: time="2025-12-16T13:12:27.085861074Z" level=info msg="Connect containerd service"
Dec 16 13:12:27.085975 containerd[1576]: time="2025-12-16T13:12:27.085876523Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 13:12:27.087728 containerd[1576]: time="2025-12-16T13:12:27.086966368Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 13:12:27.098022 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 13:12:27.102000 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 13:12:27.125338 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 13:12:27.125595 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 13:12:27.129656 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 13:12:27.154879 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 13:12:27.162971 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 13:12:27.168078 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 13:12:27.170168 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 13:12:27.173599 containerd[1576]: time="2025-12-16T13:12:27.173541778Z" level=info msg="Start subscribing containerd event"
Dec 16 13:12:27.173646 containerd[1576]: time="2025-12-16T13:12:27.173615747Z" level=info msg="Start recovering state"
Dec 16 13:12:27.173759 containerd[1576]: time="2025-12-16T13:12:27.173735622Z" level=info msg="Start event monitor"
Dec 16 13:12:27.173759 containerd[1576]: time="2025-12-16T13:12:27.173753686Z" level=info msg="Start cni network conf syncer for default"
Dec 16 13:12:27.173817 containerd[1576]: time="2025-12-16T13:12:27.173774064Z" level=info msg="Start streaming server"
Dec 16 13:12:27.173817 containerd[1576]: time="2025-12-16T13:12:27.173783662Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 13:12:27.173817 containerd[1576]: time="2025-12-16T13:12:27.173791547Z" level=info msg="runtime interface starting up..."
Dec 16 13:12:27.173817 containerd[1576]: time="2025-12-16T13:12:27.173797458Z" level=info msg="starting plugins..."
Dec 16 13:12:27.173817 containerd[1576]: time="2025-12-16T13:12:27.173810843Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 16 13:12:27.174854 containerd[1576]: time="2025-12-16T13:12:27.174267189Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 13:12:27.174854 containerd[1576]: time="2025-12-16T13:12:27.174361856Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 13:12:27.174854 containerd[1576]: time="2025-12-16T13:12:27.174425957Z" level=info msg="containerd successfully booted in 0.270659s"
Dec 16 13:12:27.174484 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 13:12:27.297621 tar[1574]: linux-amd64/README.md
Dec 16 13:12:27.324356 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 13:12:27.604051 systemd-networkd[1483]: eth0: Gained IPv6LL
Dec 16 13:12:27.606960 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 13:12:27.609585 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 13:12:27.612948 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 16 13:12:27.616073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:12:27.635178 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 13:12:27.661139 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 13:12:27.663725 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 16 13:12:27.664117 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 16 13:12:27.667233 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 13:12:28.323024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:12:28.325496 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 13:12:28.327424 systemd[1]: Startup finished in 2.810s (kernel) + 7.509s (initrd) + 4.545s (userspace) = 14.865s.
Dec 16 13:12:28.341198 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:12:28.711410 kubelet[1684]: E1216 13:12:28.711353 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:12:28.715515 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:12:28.715714 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:12:28.716120 systemd[1]: kubelet.service: Consumed 906ms CPU time, 255.4M memory peak.
Dec 16 13:12:30.107083 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 13:12:30.108283 systemd[1]: Started sshd@0-10.0.0.147:22-10.0.0.1:34584.service - OpenSSH per-connection server daemon (10.0.0.1:34584).
Dec 16 13:12:30.184401 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 34584 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:12:30.186153 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:12:30.192182 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 13:12:30.193293 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 13:12:30.199166 systemd-logind[1561]: New session 1 of user core.
Dec 16 13:12:30.214151 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 13:12:30.216914 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 13:12:30.234147 (systemd)[1703]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 13:12:30.236476 systemd-logind[1561]: New session c1 of user core.
Dec 16 13:12:30.383642 systemd[1703]: Queued start job for default target default.target.
Dec 16 13:12:30.406045 systemd[1703]: Created slice app.slice - User Application Slice.
Dec 16 13:12:30.406069 systemd[1703]: Reached target paths.target - Paths.
Dec 16 13:12:30.406109 systemd[1703]: Reached target timers.target - Timers.
Dec 16 13:12:30.407535 systemd[1703]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 13:12:30.418065 systemd[1703]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 13:12:30.418187 systemd[1703]: Reached target sockets.target - Sockets.
Dec 16 13:12:30.418225 systemd[1703]: Reached target basic.target - Basic System.
Dec 16 13:12:30.418270 systemd[1703]: Reached target default.target - Main User Target.
Dec 16 13:12:30.418300 systemd[1703]: Startup finished in 175ms.
Dec 16 13:12:30.418435 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 13:12:30.419873 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 13:12:30.482411 systemd[1]: Started sshd@1-10.0.0.147:22-10.0.0.1:34588.service - OpenSSH per-connection server daemon (10.0.0.1:34588).
Dec 16 13:12:30.537535 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 34588 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:12:30.538862 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:12:30.542807 systemd-logind[1561]: New session 2 of user core.
Dec 16 13:12:30.557977 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 13:12:30.611597 sshd[1717]: Connection closed by 10.0.0.1 port 34588
Dec 16 13:12:30.611998 sshd-session[1714]: pam_unix(sshd:session): session closed for user core
Dec 16 13:12:30.624804 systemd[1]: sshd@1-10.0.0.147:22-10.0.0.1:34588.service: Deactivated successfully.
Dec 16 13:12:30.626688 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 13:12:30.627590 systemd-logind[1561]: Session 2 logged out. Waiting for processes to exit.
Dec 16 13:12:30.630408 systemd[1]: Started sshd@2-10.0.0.147:22-10.0.0.1:34590.service - OpenSSH per-connection server daemon (10.0.0.1:34590).
Dec 16 13:12:30.631192 systemd-logind[1561]: Removed session 2.
Dec 16 13:12:30.683026 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 34590 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:12:30.684767 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:12:30.689289 systemd-logind[1561]: New session 3 of user core.
Dec 16 13:12:30.702951 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 13:12:30.752500 sshd[1726]: Connection closed by 10.0.0.1 port 34590
Dec 16 13:12:30.752769 sshd-session[1723]: pam_unix(sshd:session): session closed for user core
Dec 16 13:12:30.766730 systemd[1]: sshd@2-10.0.0.147:22-10.0.0.1:34590.service: Deactivated successfully.
Dec 16 13:12:30.768593 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 13:12:30.769432 systemd-logind[1561]: Session 3 logged out. Waiting for processes to exit.
Dec 16 13:12:30.772302 systemd[1]: Started sshd@3-10.0.0.147:22-10.0.0.1:34592.service - OpenSSH per-connection server daemon (10.0.0.1:34592).
Dec 16 13:12:30.772901 systemd-logind[1561]: Removed session 3.
Dec 16 13:12:30.829997 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 34592 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:12:30.831127 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:12:30.835176 systemd-logind[1561]: New session 4 of user core.
Dec 16 13:12:30.848958 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 13:12:30.901953 sshd[1735]: Connection closed by 10.0.0.1 port 34592
Dec 16 13:12:30.902205 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
Dec 16 13:12:30.914564 systemd[1]: sshd@3-10.0.0.147:22-10.0.0.1:34592.service: Deactivated successfully.
Dec 16 13:12:30.916296 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 13:12:30.917009 systemd-logind[1561]: Session 4 logged out. Waiting for processes to exit.
Dec 16 13:12:30.919471 systemd[1]: Started sshd@4-10.0.0.147:22-10.0.0.1:34604.service - OpenSSH per-connection server daemon (10.0.0.1:34604).
Dec 16 13:12:30.920369 systemd-logind[1561]: Removed session 4.
Dec 16 13:12:30.981955 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 34604 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:12:30.983106 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:12:30.987321 systemd-logind[1561]: New session 5 of user core.
Dec 16 13:12:30.997993 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 13:12:31.053416 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 13:12:31.053767 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:12:31.073680 sudo[1745]: pam_unix(sudo:session): session closed for user root
Dec 16 13:12:31.075347 sshd[1744]: Connection closed by 10.0.0.1 port 34604
Dec 16 13:12:31.075672 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Dec 16 13:12:31.086554 systemd[1]: sshd@4-10.0.0.147:22-10.0.0.1:34604.service: Deactivated successfully.
Dec 16 13:12:31.088982 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 13:12:31.089869 systemd-logind[1561]: Session 5 logged out. Waiting for processes to exit.
Dec 16 13:12:31.093542 systemd[1]: Started sshd@5-10.0.0.147:22-10.0.0.1:34612.service - OpenSSH per-connection server daemon (10.0.0.1:34612).
Dec 16 13:12:31.094227 systemd-logind[1561]: Removed session 5.
Dec 16 13:12:31.148814 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 34612 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:12:31.150036 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:12:31.154411 systemd-logind[1561]: New session 6 of user core.
Dec 16 13:12:31.172331 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 13:12:31.225472 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 13:12:31.225752 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:12:31.233010 sudo[1756]: pam_unix(sudo:session): session closed for user root
Dec 16 13:12:31.239381 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 13:12:31.239690 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:12:31.249094 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:12:31.295445 augenrules[1778]: No rules
Dec 16 13:12:31.297135 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:12:31.297399 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:12:31.298453 sudo[1755]: pam_unix(sudo:session): session closed for user root
Dec 16 13:12:31.300192 sshd[1754]: Connection closed by 10.0.0.1 port 34612
Dec 16 13:12:31.300553 sshd-session[1751]: pam_unix(sshd:session): session closed for user core
Dec 16 13:12:31.313584 systemd[1]: sshd@5-10.0.0.147:22-10.0.0.1:34612.service: Deactivated successfully.
Dec 16 13:12:31.315150 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 13:12:31.315918 systemd-logind[1561]: Session 6 logged out. Waiting for processes to exit.
Dec 16 13:12:31.318174 systemd[1]: Started sshd@6-10.0.0.147:22-10.0.0.1:34614.service - OpenSSH per-connection server daemon (10.0.0.1:34614).
Dec 16 13:12:31.318913 systemd-logind[1561]: Removed session 6.
Dec 16 13:12:31.378559 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 34614 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:12:31.379722 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:12:31.383487 systemd-logind[1561]: New session 7 of user core.
Dec 16 13:12:31.393917 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 13:12:31.445866 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 13:12:31.446174 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:12:31.739506 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 13:12:31.758290 (dockerd)[1812]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 13:12:31.983993 dockerd[1812]: time="2025-12-16T13:12:31.983926395Z" level=info msg="Starting up"
Dec 16 13:12:31.984736 dockerd[1812]: time="2025-12-16T13:12:31.984717559Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 13:12:31.997598 dockerd[1812]: time="2025-12-16T13:12:31.997471502Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 13:12:32.595978 dockerd[1812]: time="2025-12-16T13:12:32.595925280Z" level=info msg="Loading containers: start."
Dec 16 13:12:32.607850 kernel: Initializing XFRM netlink socket
Dec 16 13:12:33.021336 systemd-networkd[1483]: docker0: Link UP
Dec 16 13:12:33.026716 dockerd[1812]: time="2025-12-16T13:12:33.026679722Z" level=info msg="Loading containers: done."
Dec 16 13:12:33.040324 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1094083148-merged.mount: Deactivated successfully.
Dec 16 13:12:33.042505 dockerd[1812]: time="2025-12-16T13:12:33.042450964Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 13:12:33.042576 dockerd[1812]: time="2025-12-16T13:12:33.042554058Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 13:12:33.042709 dockerd[1812]: time="2025-12-16T13:12:33.042680044Z" level=info msg="Initializing buildkit"
Dec 16 13:12:33.074074 dockerd[1812]: time="2025-12-16T13:12:33.074024357Z" level=info msg="Completed buildkit initialization"
Dec 16 13:12:33.080096 dockerd[1812]: time="2025-12-16T13:12:33.080050470Z" level=info msg="Daemon has completed initialization"
Dec 16 13:12:33.080513 dockerd[1812]: time="2025-12-16T13:12:33.080132524Z" level=info msg="API listen on /run/docker.sock"
Dec 16 13:12:33.080251 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 13:12:33.571306 containerd[1576]: time="2025-12-16T13:12:33.571263656Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Dec 16 13:12:34.300369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3245415050.mount: Deactivated successfully.
Dec 16 13:12:35.153156 containerd[1576]: time="2025-12-16T13:12:35.153102383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:35.154015 containerd[1576]: time="2025-12-16T13:12:35.153946306Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073"
Dec 16 13:12:35.155069 containerd[1576]: time="2025-12-16T13:12:35.155031882Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:35.157604 containerd[1576]: time="2025-12-16T13:12:35.157556578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:35.158503 containerd[1576]: time="2025-12-16T13:12:35.158460053Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.587150601s"
Dec 16 13:12:35.158576 containerd[1576]: time="2025-12-16T13:12:35.158508664Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\""
Dec 16 13:12:35.159143 containerd[1576]: time="2025-12-16T13:12:35.159062513Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Dec 16 13:12:36.513960 containerd[1576]: time="2025-12-16T13:12:36.513892386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:36.514945 containerd[1576]: time="2025-12-16T13:12:36.514886270Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440"
Dec 16 13:12:36.516263 containerd[1576]: time="2025-12-16T13:12:36.516220473Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:36.518654 containerd[1576]: time="2025-12-16T13:12:36.518616317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:36.519504 containerd[1576]: time="2025-12-16T13:12:36.519469668Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.360376508s"
Dec 16 13:12:36.519504 containerd[1576]: time="2025-12-16T13:12:36.519497430Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\""
Dec 16 13:12:36.520005 containerd[1576]: time="2025-12-16T13:12:36.519976138Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Dec 16 13:12:37.475237 containerd[1576]: time="2025-12-16T13:12:37.475186890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:37.475910 containerd[1576]: time="2025-12-16T13:12:37.475868458Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927"
Dec 16 13:12:37.477160 containerd[1576]: time="2025-12-16T13:12:37.477122541Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:37.479681 containerd[1576]: time="2025-12-16T13:12:37.479640764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:37.480798 containerd[1576]: time="2025-12-16T13:12:37.480751197Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 960.724835ms"
Dec 16 13:12:37.480852 containerd[1576]: time="2025-12-16T13:12:37.480798916Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\""
Dec 16 13:12:37.481440 containerd[1576]: time="2025-12-16T13:12:37.481268207Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Dec 16 13:12:38.899292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount909181294.mount: Deactivated successfully.
Dec 16 13:12:38.900392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:12:38.901818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:12:39.111560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
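The kubelet.service restart scheduled at 13:12:38.900392 is the unit looping: kubelet exits immediately (the run.go error further down) until /var/lib/kubelet/config.yaml exists, which kubeadm normally writes during init/join. A sketch of the check the failure reduces to; the path is taken from the error message in this log:

```shell
# kubelet exits with status 1 until its config file exists; "kubeadm init" /
# "kubeadm join" writes this file. Path taken from the run.go error in the log.
cfg=/var/lib/kubelet/config.yaml
if [ -f "$cfg" ]; then
  status="config present: kubelet can start"
else
  status="config missing: kubelet exits and systemd schedules a restart"
fi
echo "$status"
```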
Dec 16 13:12:39.122292 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:12:39.426305 containerd[1576]: time="2025-12-16T13:12:39.426241601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:39.427176 containerd[1576]: time="2025-12-16T13:12:39.427137561Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293"
Dec 16 13:12:39.428529 containerd[1576]: time="2025-12-16T13:12:39.428287248Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:39.430212 containerd[1576]: time="2025-12-16T13:12:39.430164589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:39.430513 containerd[1576]: time="2025-12-16T13:12:39.430481534Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.949186557s"
Dec 16 13:12:39.430569 containerd[1576]: time="2025-12-16T13:12:39.430515467Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\""
Dec 16 13:12:39.431058 containerd[1576]: time="2025-12-16T13:12:39.431032497Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Dec 16 13:12:39.446040 kubelet[2114]: E1216 13:12:39.445989 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:12:39.452195 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:12:39.452601 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:12:39.453010 systemd[1]: kubelet.service: Consumed 501ms CPU time, 109.6M memory peak.
Dec 16 13:12:40.085053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2533234742.mount: Deactivated successfully.
Dec 16 13:12:41.620014 containerd[1576]: time="2025-12-16T13:12:41.619952305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:41.620792 containerd[1576]: time="2025-12-16T13:12:41.620759599Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Dec 16 13:12:41.622108 containerd[1576]: time="2025-12-16T13:12:41.622052294Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:41.624737 containerd[1576]: time="2025-12-16T13:12:41.624694290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:41.625558 containerd[1576]: time="2025-12-16T13:12:41.625526010Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.194466371s"
Dec 16 13:12:41.625599 containerd[1576]: time="2025-12-16T13:12:41.625556677Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Dec 16 13:12:41.626093 containerd[1576]: time="2025-12-16T13:12:41.626057467Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Dec 16 13:12:42.565057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2599614441.mount: Deactivated successfully.
Dec 16 13:12:42.571485 containerd[1576]: time="2025-12-16T13:12:42.571449001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:42.572325 containerd[1576]: time="2025-12-16T13:12:42.572281583Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Dec 16 13:12:42.573444 containerd[1576]: time="2025-12-16T13:12:42.573397426Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:42.575338 containerd[1576]: time="2025-12-16T13:12:42.575291458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:42.575913 containerd[1576]: time="2025-12-16T13:12:42.575879251Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 949.79812ms"
Dec 16 13:12:42.575913 containerd[1576]: time="2025-12-16T13:12:42.575906031Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Dec 16 13:12:42.576398 containerd[1576]: time="2025-12-16T13:12:42.576372155Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Dec 16 13:12:43.487109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3816415091.mount: Deactivated successfully.
Dec 16 13:12:45.798059 containerd[1576]: time="2025-12-16T13:12:45.798007733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:45.798845 containerd[1576]: time="2025-12-16T13:12:45.798776966Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814"
Dec 16 13:12:45.799955 containerd[1576]: time="2025-12-16T13:12:45.799903409Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:45.802445 containerd[1576]: time="2025-12-16T13:12:45.802396545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:45.803290 containerd[1576]: time="2025-12-16T13:12:45.803256919Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.226858824s"
Dec 16 13:12:45.803290 containerd[1576]: time="2025-12-16T13:12:45.803283920Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Dec 16 13:12:48.554598 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:12:48.554812 systemd[1]: kubelet.service: Consumed 501ms CPU time, 109.6M memory peak.
Dec 16 13:12:48.556846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:12:48.582303 systemd[1]: Reload requested from client PID 2265 ('systemctl') (unit session-7.scope)...
Dec 16 13:12:48.582318 systemd[1]: Reloading...
Dec 16 13:12:48.657848 zram_generator::config[2308]: No configuration found.
Dec 16 13:12:48.881431 systemd[1]: Reloading finished in 298 ms.
Dec 16 13:12:48.946449 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 13:12:48.946545 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 13:12:48.946865 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:12:48.946905 systemd[1]: kubelet.service: Consumed 148ms CPU time, 98.2M memory peak.
Dec 16 13:12:48.948308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:12:49.115467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:12:49.119471 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 13:12:49.156103 kubelet[2356]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 13:12:49.156103 kubelet[2356]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:12:49.156400 kubelet[2356]: I1216 13:12:49.156133 2356 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 13:12:49.636053 kubelet[2356]: I1216 13:12:49.635926 2356 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 16 13:12:49.636053 kubelet[2356]: I1216 13:12:49.635953 2356 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 13:12:49.636519 kubelet[2356]: I1216 13:12:49.636492 2356 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 16 13:12:49.636519 kubelet[2356]: I1216 13:12:49.636516 2356 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 13:12:49.636817 kubelet[2356]: I1216 13:12:49.636787 2356 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 13:12:50.095367 kubelet[2356]: E1216 13:12:50.095324 2356 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 13:12:50.095560 kubelet[2356]: I1216 13:12:50.095480 2356 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 13:12:50.099479 kubelet[2356]: I1216 13:12:50.099460 2356 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 13:12:50.104172 kubelet[2356]: I1216 13:12:50.104153 2356 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 16 13:12:50.105275 kubelet[2356]: I1216 13:12:50.105236 2356 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 13:12:50.105418 kubelet[2356]: I1216 13:12:50.105262 2356 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 13:12:50.105418 kubelet[2356]: I1216 13:12:50.105416 2356 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 13:12:50.105537 kubelet[2356]: I1216 13:12:50.105425 2356 container_manager_linux.go:306] "Creating device plugin manager"
Dec 16 13:12:50.105537 kubelet[2356]: I1216 13:12:50.105518 2356 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 16 13:12:50.108541 kubelet[2356]: I1216 13:12:50.108512 2356 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:12:50.108709 kubelet[2356]: I1216 13:12:50.108692 2356 kubelet.go:475] "Attempting to sync node with API server"
Dec 16 13:12:50.108737 kubelet[2356]: I1216 13:12:50.108722 2356 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 13:12:50.108759 kubelet[2356]: I1216 13:12:50.108746 2356 kubelet.go:387] "Adding apiserver pod source"
Dec 16 13:12:50.108781 kubelet[2356]: I1216 13:12:50.108768 2356 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 13:12:50.109180 kubelet[2356]: E1216 13:12:50.109128 2356 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 13:12:50.109180 kubelet[2356]: E1216 13:12:50.109159 2356 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 13:12:50.115559 kubelet[2356]: I1216 13:12:50.115180 2356 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 13:12:50.116120 kubelet[2356]: I1216 13:12:50.116090 2356 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 13:12:50.116170 kubelet[2356]: I1216 13:12:50.116129 2356 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 16 13:12:50.116194 kubelet[2356]: W1216 13:12:50.116183 2356 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 13:12:50.119577 kubelet[2356]: I1216 13:12:50.119541 2356 server.go:1262] "Started kubelet"
Dec 16 13:12:50.120421 kubelet[2356]: I1216 13:12:50.119915 2356 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 13:12:50.120421 kubelet[2356]: I1216 13:12:50.119957 2356 server_v1.go:49] "podresources" method="list" useActivePods=true
Dec 16 13:12:50.120421 kubelet[2356]: I1216 13:12:50.120179 2356 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 13:12:50.120421 kubelet[2356]: I1216 13:12:50.120270 2356 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 13:12:50.120565 kubelet[2356]: I1216 13:12:50.120520 2356 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 13:12:50.122839 kubelet[2356]: I1216 13:12:50.122634 2356 server.go:310] "Adding debug handlers to kubelet server"
Dec 16 13:12:50.124159 kubelet[2356]: I1216 13:12:50.124021 2356 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 13:12:50.124427 kubelet[2356]: E1216 13:12:50.123232 2356 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.147:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.147:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b44ca11d5f17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 13:12:50.119515927 +0000 UTC m=+0.996269989,LastTimestamp:2025-12-16 13:12:50.119515927 +0000 UTC m=+0.996269989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 16 13:12:50.124571 kubelet[2356]: E1216 13:12:50.124544 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 13:12:50.124656 kubelet[2356]: I1216 13:12:50.124647 2356 volume_manager.go:313] "Starting Kubelet Volume Manager"
Dec 16 13:12:50.124872 kubelet[2356]: I1216 13:12:50.124860 2356 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 16 13:12:50.124954 kubelet[2356]: I1216 13:12:50.124946 2356 reconciler.go:29] "Reconciler: start to sync state"
Dec 16 13:12:50.125291 kubelet[2356]: E1216 13:12:50.125272 2356 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 13:12:50.125423 kubelet[2356]: E1216 13:12:50.125410 2356 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 13:12:50.125472 kubelet[2356]: I1216 13:12:50.125435 2356 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 13:12:50.125874 kubelet[2356]: E1216 13:12:50.125759 2356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="200ms"
Dec 16 13:12:50.126215 kubelet[2356]: I1216 13:12:50.126196 2356 factory.go:223] Registration of the containerd container factory successfully
Dec 16 13:12:50.126215 kubelet[2356]: I1216 13:12:50.126211 2356 factory.go:223] Registration of the systemd container factory successfully
Dec 16 13:12:50.137246 kubelet[2356]: I1216 13:12:50.137167 2356 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 13:12:50.137246 kubelet[2356]: I1216 13:12:50.137184 2356 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 13:12:50.137246 kubelet[2356]: I1216 13:12:50.137200 2356 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:12:50.139944 kubelet[2356]: I1216 13:12:50.139926 2356 policy_none.go:49] "None policy: Start"
Dec 16 13:12:50.139944 kubelet[2356]: I1216 13:12:50.139943 2356 memory_manager.go:187] "Starting memorymanager" policy="None"
Dec 16 13:12:50.140032 kubelet[2356]: I1216 13:12:50.139955 2356 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Dec 16 13:12:50.141166 kubelet[2356]: I1216 13:12:50.141146 2356 policy_none.go:47] "Start"
Dec 16 13:12:50.143170 kubelet[2356]: I1216 13:12:50.143132 2356 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Dec 16 13:12:50.144319 kubelet[2356]: I1216 13:12:50.144299 2356 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Dec 16 13:12:50.144319 kubelet[2356]: I1216 13:12:50.144312 2356 status_manager.go:244] "Starting to sync pod status with apiserver"
Dec 16 13:12:50.144384 kubelet[2356]: I1216 13:12:50.144328 2356 kubelet.go:2427] "Starting kubelet main sync loop"
Dec 16 13:12:50.144384 kubelet[2356]: E1216 13:12:50.144356 2356 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 13:12:50.145921 kubelet[2356]: E1216 13:12:50.145898 2356 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 16 13:12:50.149239 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 16 13:12:50.162468 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 16 13:12:50.165266 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
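The kubepods QoS slices created above get per-pod children shortly afterwards. Judging by the "Created slice" unit names in this log, the systemd cgroup driver composes each pod's slice unit from the QoS parent slice name, "pod" plus the pod UID, and the ".slice" suffix; a sketch of that composition (pattern inferred from this log, UID copied from it):

```shell
# Compose a per-pod slice unit name as the systemd cgroup driver does
# (pattern inferred from the "Created slice" lines in this log).
qos=kubepods-burstable
uid=e489f8c434a76efd0d631eea40b9e87b   # kube-apiserver-localhost pod UID from the log
echo "${qos}-pod${uid}.slice"
```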
Dec 16 13:12:50.176619 kubelet[2356]: E1216 13:12:50.176586 2356 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 16 13:12:50.176988 kubelet[2356]: I1216 13:12:50.176780 2356 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 13:12:50.176988 kubelet[2356]: I1216 13:12:50.176789 2356 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 13:12:50.176988 kubelet[2356]: I1216 13:12:50.176917 2356 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 13:12:50.177697 kubelet[2356]: E1216 13:12:50.177668 2356 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 13:12:50.177744 kubelet[2356]: E1216 13:12:50.177700 2356 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Dec 16 13:12:50.254434 systemd[1]: Created slice kubepods-burstable-pode489f8c434a76efd0d631eea40b9e87b.slice - libcontainer container kubepods-burstable-pode489f8c434a76efd0d631eea40b9e87b.slice.
Dec 16 13:12:50.278247 kubelet[2356]: I1216 13:12:50.278229 2356 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 13:12:50.278526 kubelet[2356]: E1216 13:12:50.278507 2356 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost"
Dec 16 13:12:50.281121 kubelet[2356]: E1216 13:12:50.281086 2356 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 13:12:50.283701 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice.
Dec 16 13:12:50.297879 kubelet[2356]: E1216 13:12:50.297858 2356 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 13:12:50.300619 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice.
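The reconciler's VerifyControllerAttachedVolume entries that follow give each volume a unique name; reading those lines, the name appears to be composed as the volume plugin name plus `<podUID>-<volumeName>`. A sketch, with the plugin, UID, and volume name copied from this log:

```shell
# Compose the reconciler's unique volume name (pattern read off the
# VerifyControllerAttachedVolume lines in this log): plugin/podUID-volumeName.
plugin=kubernetes.io/host-path
uid=e489f8c434a76efd0d631eea40b9e87b   # kube-apiserver-localhost pod UID from the log
vol=ca-certs
echo "${plugin}/${uid}-${vol}"
```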
Dec 16 13:12:50.302238 kubelet[2356]: E1216 13:12:50.302216 2356 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 13:12:50.326663 kubelet[2356]: I1216 13:12:50.326415 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e489f8c434a76efd0d631eea40b9e87b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e489f8c434a76efd0d631eea40b9e87b\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 13:12:50.326663 kubelet[2356]: I1216 13:12:50.326447 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e489f8c434a76efd0d631eea40b9e87b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e489f8c434a76efd0d631eea40b9e87b\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 13:12:50.326663 kubelet[2356]: I1216 13:12:50.326467 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:12:50.326663 kubelet[2356]: I1216 13:12:50.326485 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:12:50.326663 kubelet[2356]: I1216 13:12:50.326520 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:12:50.326871 kubelet[2356]: I1216 13:12:50.326554 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e489f8c434a76efd0d631eea40b9e87b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e489f8c434a76efd0d631eea40b9e87b\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 13:12:50.326871 kubelet[2356]: I1216 13:12:50.326572 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:12:50.326871 kubelet[2356]: I1216 13:12:50.326598 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:12:50.326871 kubelet[2356]: I1216 13:12:50.326616 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost"
Dec 16 13:12:50.326871 kubelet[2356]: E1216 13:12:50.326680 2356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="400ms"
Dec 16 13:12:50.480118 kubelet[2356]: I1216 13:12:50.480061 2356 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 13:12:50.480376 kubelet[2356]: E1216 13:12:50.480338 2356 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost"
Dec 16 13:12:50.727888 kubelet[2356]: E1216 13:12:50.727846 2356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="800ms"
Dec 16 13:12:50.819280 containerd[1576]: time="2025-12-16T13:12:50.819192520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e489f8c434a76efd0d631eea40b9e87b,Namespace:kube-system,Attempt:0,}"
Dec 16 13:12:50.821488 containerd[1576]: time="2025-12-16T13:12:50.821460194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}"
Dec 16 13:12:50.823687 containerd[1576]: time="2025-12-16T13:12:50.823658407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}"
Dec 16 13:12:50.882096 kubelet[2356]: I1216 13:12:50.882075 2356 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 13:12:50.882346 kubelet[2356]: E1216 13:12:50.882319 2356 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused"
node="localhost" Dec 16 13:12:51.189273 kubelet[2356]: E1216 13:12:51.189237 2356 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:12:51.357750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount37626233.mount: Deactivated successfully. Dec 16 13:12:51.362578 containerd[1576]: time="2025-12-16T13:12:51.362522662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:12:51.366066 containerd[1576]: time="2025-12-16T13:12:51.366039248Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 16 13:12:51.366995 containerd[1576]: time="2025-12-16T13:12:51.366950488Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:12:51.367877 containerd[1576]: time="2025-12-16T13:12:51.367817474Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:12:51.368731 containerd[1576]: time="2025-12-16T13:12:51.368703506Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:12:51.369606 containerd[1576]: time="2025-12-16T13:12:51.369566625Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Dec 16 13:12:51.370516 containerd[1576]: time="2025-12-16T13:12:51.370485218Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:12:51.372354 containerd[1576]: time="2025-12-16T13:12:51.372315110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:12:51.373525 containerd[1576]: time="2025-12-16T13:12:51.373480135Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 548.056928ms" Dec 16 13:12:51.374170 containerd[1576]: time="2025-12-16T13:12:51.374141235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 552.682405ms" Dec 16 13:12:51.374894 containerd[1576]: time="2025-12-16T13:12:51.374872187Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 550.420512ms" Dec 16 13:12:51.392090 kubelet[2356]: E1216 13:12:51.392033 2356 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:12:51.404143 containerd[1576]: time="2025-12-16T13:12:51.404095150Z" level=info msg="connecting to shim 9b1934d0764121b3e8a5f7ba2f8635fa273a0d64d1076feca0413c48f4649366" address="unix:///run/containerd/s/11001cf11f7ca5b9951c8d037c28594ee8f17e331dbe44a4b91b237e10fff8f2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:12:51.407119 containerd[1576]: time="2025-12-16T13:12:51.406546328Z" level=info msg="connecting to shim 06168d1ae05f8562c10677be9c110a59ab09d6349801952bbbaecc3903948b57" address="unix:///run/containerd/s/dc1d8eb941c3c460808f01ce59680d308f9068701f06e37fc125aeda9d2a975f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:12:51.410474 containerd[1576]: time="2025-12-16T13:12:51.410430564Z" level=info msg="connecting to shim bbb904ce63325c7fe2b5af2b2d46213b9261fb7fa725b4fc0cadfe60770692da" address="unix:///run/containerd/s/89da846640545fd2ed637cfc81c9e27d354b0eea6d6ca47ea9441bd6a25a5aa6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:12:51.431977 systemd[1]: Started cri-containerd-9b1934d0764121b3e8a5f7ba2f8635fa273a0d64d1076feca0413c48f4649366.scope - libcontainer container 9b1934d0764121b3e8a5f7ba2f8635fa273a0d64d1076feca0413c48f4649366. Dec 16 13:12:51.437998 systemd[1]: Started cri-containerd-06168d1ae05f8562c10677be9c110a59ab09d6349801952bbbaecc3903948b57.scope - libcontainer container 06168d1ae05f8562c10677be9c110a59ab09d6349801952bbbaecc3903948b57. Dec 16 13:12:51.440620 systemd[1]: Started cri-containerd-bbb904ce63325c7fe2b5af2b2d46213b9261fb7fa725b4fc0cadfe60770692da.scope - libcontainer container bbb904ce63325c7fe2b5af2b2d46213b9261fb7fa725b4fc0cadfe60770692da. 
Dec 16 13:12:51.491088 containerd[1576]: time="2025-12-16T13:12:51.491044863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbb904ce63325c7fe2b5af2b2d46213b9261fb7fa725b4fc0cadfe60770692da\"" Dec 16 13:12:51.495283 containerd[1576]: time="2025-12-16T13:12:51.495263206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e489f8c434a76efd0d631eea40b9e87b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b1934d0764121b3e8a5f7ba2f8635fa273a0d64d1076feca0413c48f4649366\"" Dec 16 13:12:51.496792 containerd[1576]: time="2025-12-16T13:12:51.496769762Z" level=info msg="CreateContainer within sandbox \"bbb904ce63325c7fe2b5af2b2d46213b9261fb7fa725b4fc0cadfe60770692da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:12:51.497090 containerd[1576]: time="2025-12-16T13:12:51.497070886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"06168d1ae05f8562c10677be9c110a59ab09d6349801952bbbaecc3903948b57\"" Dec 16 13:12:51.499106 containerd[1576]: time="2025-12-16T13:12:51.498874129Z" level=info msg="CreateContainer within sandbox \"9b1934d0764121b3e8a5f7ba2f8635fa273a0d64d1076feca0413c48f4649366\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:12:51.504763 containerd[1576]: time="2025-12-16T13:12:51.504744560Z" level=info msg="CreateContainer within sandbox \"06168d1ae05f8562c10677be9c110a59ab09d6349801952bbbaecc3903948b57\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:12:51.506393 containerd[1576]: time="2025-12-16T13:12:51.506372263Z" level=info msg="Container 62fc835f816658222c5399cf4c9020079a5377eba13c1fff6a2c9f0b01d42a96: CDI devices from CRI Config.CDIDevices: []" Dec 16 
13:12:51.516978 containerd[1576]: time="2025-12-16T13:12:51.516933964Z" level=info msg="CreateContainer within sandbox \"bbb904ce63325c7fe2b5af2b2d46213b9261fb7fa725b4fc0cadfe60770692da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"62fc835f816658222c5399cf4c9020079a5377eba13c1fff6a2c9f0b01d42a96\"" Dec 16 13:12:51.517683 containerd[1576]: time="2025-12-16T13:12:51.517650818Z" level=info msg="StartContainer for \"62fc835f816658222c5399cf4c9020079a5377eba13c1fff6a2c9f0b01d42a96\"" Dec 16 13:12:51.519517 containerd[1576]: time="2025-12-16T13:12:51.518964252Z" level=info msg="Container f75ab9b2e4388f2077ee57d5a9e32f74e235386c97ce5dc85d60b2ec3deef112: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:12:51.519517 containerd[1576]: time="2025-12-16T13:12:51.519321753Z" level=info msg="connecting to shim 62fc835f816658222c5399cf4c9020079a5377eba13c1fff6a2c9f0b01d42a96" address="unix:///run/containerd/s/89da846640545fd2ed637cfc81c9e27d354b0eea6d6ca47ea9441bd6a25a5aa6" protocol=ttrpc version=3 Dec 16 13:12:51.520363 containerd[1576]: time="2025-12-16T13:12:51.520343750Z" level=info msg="Container 9406ced7ce1933b2aa6c38318445678688378ba1bef25b6ebfe5bb03a752e0e4: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:12:51.525050 containerd[1576]: time="2025-12-16T13:12:51.525024018Z" level=info msg="CreateContainer within sandbox \"9b1934d0764121b3e8a5f7ba2f8635fa273a0d64d1076feca0413c48f4649366\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f75ab9b2e4388f2077ee57d5a9e32f74e235386c97ce5dc85d60b2ec3deef112\"" Dec 16 13:12:51.525468 containerd[1576]: time="2025-12-16T13:12:51.525451160Z" level=info msg="StartContainer for \"f75ab9b2e4388f2077ee57d5a9e32f74e235386c97ce5dc85d60b2ec3deef112\"" Dec 16 13:12:51.526717 containerd[1576]: time="2025-12-16T13:12:51.526697558Z" level=info msg="connecting to shim f75ab9b2e4388f2077ee57d5a9e32f74e235386c97ce5dc85d60b2ec3deef112" 
address="unix:///run/containerd/s/11001cf11f7ca5b9951c8d037c28594ee8f17e331dbe44a4b91b237e10fff8f2" protocol=ttrpc version=3 Dec 16 13:12:51.529188 containerd[1576]: time="2025-12-16T13:12:51.529111686Z" level=info msg="CreateContainer within sandbox \"06168d1ae05f8562c10677be9c110a59ab09d6349801952bbbaecc3903948b57\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9406ced7ce1933b2aa6c38318445678688378ba1bef25b6ebfe5bb03a752e0e4\"" Dec 16 13:12:51.529614 containerd[1576]: time="2025-12-16T13:12:51.529590845Z" level=info msg="StartContainer for \"9406ced7ce1933b2aa6c38318445678688378ba1bef25b6ebfe5bb03a752e0e4\"" Dec 16 13:12:51.530868 containerd[1576]: time="2025-12-16T13:12:51.530836140Z" level=info msg="connecting to shim 9406ced7ce1933b2aa6c38318445678688378ba1bef25b6ebfe5bb03a752e0e4" address="unix:///run/containerd/s/dc1d8eb941c3c460808f01ce59680d308f9068701f06e37fc125aeda9d2a975f" protocol=ttrpc version=3 Dec 16 13:12:51.531347 kubelet[2356]: E1216 13:12:51.530931 2356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="1.6s" Dec 16 13:12:51.540034 systemd[1]: Started cri-containerd-62fc835f816658222c5399cf4c9020079a5377eba13c1fff6a2c9f0b01d42a96.scope - libcontainer container 62fc835f816658222c5399cf4c9020079a5377eba13c1fff6a2c9f0b01d42a96. Dec 16 13:12:51.543782 systemd[1]: Started cri-containerd-f75ab9b2e4388f2077ee57d5a9e32f74e235386c97ce5dc85d60b2ec3deef112.scope - libcontainer container f75ab9b2e4388f2077ee57d5a9e32f74e235386c97ce5dc85d60b2ec3deef112. Dec 16 13:12:51.548668 systemd[1]: Started cri-containerd-9406ced7ce1933b2aa6c38318445678688378ba1bef25b6ebfe5bb03a752e0e4.scope - libcontainer container 9406ced7ce1933b2aa6c38318445678688378ba1bef25b6ebfe5bb03a752e0e4. 
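The lease-controller retries in this log back off from interval="400ms" to "800ms" to "1.6s": the interval doubles on each consecutive failure. A minimal sketch of that doubling (the cap value below is an illustrative assumption, not something stated in the log):

```python
def next_retry_interval(current_ms: float, cap_ms: float = 7000) -> float:
    """Double the retry interval after a failure, up to a cap.

    The 400ms starting point and the doubling match the intervals logged
    above; the 7000ms cap is a hypothetical bound for illustration only.
    """
    return min(current_ms * 2, cap_ms)

intervals = [400.0]
for _ in range(2):
    intervals.append(next_retry_interval(intervals[-1]))
# intervals is now [400.0, 800.0, 1600.0], matching the 400ms/800ms/1.6s entries
```

Backing off like this keeps a disconnected kubelet from hammering an apiserver that is still coming up, while converging quickly once it answers.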
Dec 16 13:12:51.573025 kubelet[2356]: E1216 13:12:51.572980 2356 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:12:51.607891 containerd[1576]: time="2025-12-16T13:12:51.607852571Z" level=info msg="StartContainer for \"62fc835f816658222c5399cf4c9020079a5377eba13c1fff6a2c9f0b01d42a96\" returns successfully" Dec 16 13:12:51.617857 containerd[1576]: time="2025-12-16T13:12:51.616922834Z" level=info msg="StartContainer for \"f75ab9b2e4388f2077ee57d5a9e32f74e235386c97ce5dc85d60b2ec3deef112\" returns successfully" Dec 16 13:12:51.625649 containerd[1576]: time="2025-12-16T13:12:51.625609248Z" level=info msg="StartContainer for \"9406ced7ce1933b2aa6c38318445678688378ba1bef25b6ebfe5bb03a752e0e4\" returns successfully" Dec 16 13:12:51.684473 kubelet[2356]: I1216 13:12:51.684373 2356 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:12:52.156850 kubelet[2356]: E1216 13:12:52.156388 2356 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:12:52.157771 kubelet[2356]: E1216 13:12:52.157744 2356 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:12:52.159760 kubelet[2356]: E1216 13:12:52.159726 2356 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:12:52.736684 kubelet[2356]: I1216 13:12:52.736623 2356 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 13:12:52.736684 kubelet[2356]: E1216 13:12:52.736667 2356 
kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 16 13:12:52.743957 kubelet[2356]: E1216 13:12:52.743922 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:52.844690 kubelet[2356]: E1216 13:12:52.844639 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:52.945340 kubelet[2356]: E1216 13:12:52.945295 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:53.046463 kubelet[2356]: E1216 13:12:53.046342 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:53.146495 kubelet[2356]: E1216 13:12:53.146470 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:53.161731 kubelet[2356]: E1216 13:12:53.161695 2356 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:12:53.161897 kubelet[2356]: E1216 13:12:53.161772 2356 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:12:53.246615 kubelet[2356]: E1216 13:12:53.246570 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:53.347312 kubelet[2356]: E1216 13:12:53.347223 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:53.447892 kubelet[2356]: E1216 13:12:53.447851 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:53.548539 kubelet[2356]: E1216 
13:12:53.548490 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:53.649116 kubelet[2356]: E1216 13:12:53.648989 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:53.749685 kubelet[2356]: E1216 13:12:53.749631 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:53.850695 kubelet[2356]: E1216 13:12:53.850649 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:53.950924 kubelet[2356]: E1216 13:12:53.950884 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:54.051006 kubelet[2356]: E1216 13:12:54.050966 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:54.151966 kubelet[2356]: E1216 13:12:54.151931 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:54.164128 kubelet[2356]: E1216 13:12:54.164106 2356 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:12:54.252976 kubelet[2356]: E1216 13:12:54.252884 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:54.353464 kubelet[2356]: E1216 13:12:54.353435 2356 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:12:54.425741 kubelet[2356]: I1216 13:12:54.425713 2356 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 13:12:54.441372 kubelet[2356]: I1216 13:12:54.441321 2356 kubelet.go:3219] "Creating a mirror pod 
for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:12:54.445818 kubelet[2356]: I1216 13:12:54.445797 2356 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 13:12:54.564976 systemd[1]: Reload requested from client PID 2647 ('systemctl') (unit session-7.scope)... Dec 16 13:12:54.564992 systemd[1]: Reloading... Dec 16 13:12:54.640874 zram_generator::config[2690]: No configuration found. Dec 16 13:12:54.876756 systemd[1]: Reloading finished in 311 ms. Dec 16 13:12:54.908976 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:12:54.917249 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:12:54.917538 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:12:54.917590 systemd[1]: kubelet.service: Consumed 916ms CPU time, 125.8M memory peak. Dec 16 13:12:54.919319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:12:55.142840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:12:55.157383 (kubelet)[2735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:12:55.328761 kubelet[2735]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:12:55.328761 kubelet[2735]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
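When kubelet.service stops above, systemd logs a resource-accounting line ("Consumed 916ms CPU time, 125.8M memory peak."). A sketch of extracting those figures, e.g. to compare resource use across restarts (the pattern matches the exact wording in this log; systemd's phrasing can vary between versions):

```python
import re

# Matches systemd accounting lines like:
#   "kubelet.service: Consumed 916ms CPU time, 125.8M memory peak."
CONSUMED_RE = re.compile(
    r"Consumed (?P<cpu>[\d.]+m?s) CPU time, (?P<mem>[\d.]+[KMG]) memory peak"
)

def service_accounting(line: str):
    """Return (cpu_time, memory_peak) strings, or None if the line doesn't match."""
    m = CONSUMED_RE.search(line)
    return (m.group("cpu"), m.group("mem")) if m else None

acct = service_accounting(
    "kubelet.service: Consumed 916ms CPU time, 125.8M memory peak."
)
# acct is ("916ms", "125.8M")
```

Note also the two deprecation warnings logged at startup: --pod-infra-container-image and --volume-plugin-dir are flagged for removal, with kubelet pointing at its config file as the replacement mechanism.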
Dec 16 13:12:55.329141 kubelet[2735]: I1216 13:12:55.328802 2735 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:12:55.334841 kubelet[2735]: I1216 13:12:55.334795 2735 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:12:55.334895 kubelet[2735]: I1216 13:12:55.334844 2735 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:12:55.334895 kubelet[2735]: I1216 13:12:55.334875 2735 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:12:55.334895 kubelet[2735]: I1216 13:12:55.334883 2735 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:12:55.335091 kubelet[2735]: I1216 13:12:55.335071 2735 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:12:55.336254 kubelet[2735]: I1216 13:12:55.336236 2735 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:12:55.339926 kubelet[2735]: I1216 13:12:55.339896 2735 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:12:55.344720 kubelet[2735]: I1216 13:12:55.344701 2735 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:12:55.350001 kubelet[2735]: I1216 13:12:55.349979 2735 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 13:12:55.350214 kubelet[2735]: I1216 13:12:55.350180 2735 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:12:55.350380 kubelet[2735]: I1216 13:12:55.350205 2735 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:12:55.350468 kubelet[2735]: I1216 13:12:55.350386 2735 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:12:55.350468 
kubelet[2735]: I1216 13:12:55.350394 2735 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:12:55.350468 kubelet[2735]: I1216 13:12:55.350414 2735 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:12:55.351293 kubelet[2735]: I1216 13:12:55.351274 2735 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:12:55.351456 kubelet[2735]: I1216 13:12:55.351433 2735 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:12:55.351487 kubelet[2735]: I1216 13:12:55.351459 2735 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:12:55.351487 kubelet[2735]: I1216 13:12:55.351482 2735 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:12:55.351530 kubelet[2735]: I1216 13:12:55.351497 2735 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:12:55.353190 kubelet[2735]: I1216 13:12:55.353169 2735 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:12:55.357085 kubelet[2735]: I1216 13:12:55.357069 2735 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:12:55.357153 kubelet[2735]: I1216 13:12:55.357094 2735 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:12:55.360732 kubelet[2735]: I1216 13:12:55.360704 2735 server.go:1262] "Started kubelet" Dec 16 13:12:55.361753 kubelet[2735]: I1216 13:12:55.361629 2735 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:12:55.361753 kubelet[2735]: I1216 13:12:55.361678 2735 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:12:55.361876 kubelet[2735]: I1216 13:12:55.361861 2735 server_v1.go:49] 
"podresources" method="list" useActivePods=true
Dec 16 13:12:55.362171 kubelet[2735]: I1216 13:12:55.362155 2735 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 13:12:55.362641 kubelet[2735]: I1216 13:12:55.362617 2735 server.go:310] "Adding debug handlers to kubelet server"
Dec 16 13:12:55.365206 kubelet[2735]: I1216 13:12:55.365170 2735 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 13:12:55.366984 kubelet[2735]: I1216 13:12:55.366962 2735 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 13:12:55.369837 kubelet[2735]: I1216 13:12:55.369773 2735 volume_manager.go:313] "Starting Kubelet Volume Manager"
Dec 16 13:12:55.369999 kubelet[2735]: I1216 13:12:55.369987 2735 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 16 13:12:55.370225 kubelet[2735]: I1216 13:12:55.370205 2735 reconciler.go:29] "Reconciler: start to sync state"
Dec 16 13:12:55.371166 kubelet[2735]: I1216 13:12:55.371151 2735 factory.go:223] Registration of the systemd container factory successfully
Dec 16 13:12:55.371340 kubelet[2735]: I1216 13:12:55.371318 2735 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 13:12:55.373408 kubelet[2735]: E1216 13:12:55.373379 2735 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 13:12:55.376257 kubelet[2735]: I1216 13:12:55.375523 2735 factory.go:223] Registration of the containerd container factory successfully
Dec 16 13:12:55.379986 kubelet[2735]: I1216 13:12:55.379952 2735 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Dec 16 13:12:55.381234 kubelet[2735]: I1216 13:12:55.381208 2735 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Dec 16 13:12:55.381234 kubelet[2735]: I1216 13:12:55.381227 2735 status_manager.go:244] "Starting to sync pod status with apiserver"
Dec 16 13:12:55.381382 kubelet[2735]: I1216 13:12:55.381362 2735 kubelet.go:2427] "Starting kubelet main sync loop"
Dec 16 13:12:55.381430 kubelet[2735]: E1216 13:12:55.381411 2735 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 13:12:55.456937 kubelet[2735]: I1216 13:12:55.456909 2735 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 13:12:55.456937 kubelet[2735]: I1216 13:12:55.456931 2735 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 13:12:55.456937 kubelet[2735]: I1216 13:12:55.456950 2735 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:12:55.457173 kubelet[2735]: I1216 13:12:55.457131 2735 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 16 13:12:55.457173 kubelet[2735]: I1216 13:12:55.457143 2735 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 16 13:12:55.457228 kubelet[2735]: I1216 13:12:55.457178 2735 policy_none.go:49] "None policy: Start"
Dec 16 13:12:55.457228 kubelet[2735]: I1216 13:12:55.457203 2735 memory_manager.go:187] "Starting memorymanager" policy="None"
Dec 16 13:12:55.457228 kubelet[2735]: I1216 13:12:55.457213 2735 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Dec 16 13:12:55.457329 kubelet[2735]: I1216 13:12:55.457307 2735 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Dec 16 13:12:55.457329 kubelet[2735]: I1216 13:12:55.457327 2735 policy_none.go:47] "Start"
Dec 16 13:12:55.461649 kubelet[2735]: E1216 13:12:55.461617 2735 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 16 13:12:55.462090 kubelet[2735]: I1216 13:12:55.462073 2735 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 13:12:55.462134 kubelet[2735]: I1216 13:12:55.462094 2735 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 13:12:55.462575 kubelet[2735]: I1216 13:12:55.462551 2735 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 13:12:55.464431 kubelet[2735]: E1216 13:12:55.464292 2735 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 13:12:55.483426 kubelet[2735]: I1216 13:12:55.483305 2735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 16 13:12:55.483764 kubelet[2735]: I1216 13:12:55.483725 2735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 16 13:12:55.483891 kubelet[2735]: I1216 13:12:55.483853 2735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:12:55.489658 kubelet[2735]: E1216 13:12:55.489502 2735 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:12:55.490218 kubelet[2735]: E1216 13:12:55.489766 2735 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Dec 16 13:12:55.490415 kubelet[2735]: E1216 13:12:55.490396 2735 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 16 13:12:55.565361 sudo[2772]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 16 13:12:55.565761 sudo[2772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 16 13:12:55.566250 kubelet[2735]: I1216 13:12:55.565666 2735 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 13:12:55.571435 kubelet[2735]: I1216 13:12:55.571380 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e489f8c434a76efd0d631eea40b9e87b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e489f8c434a76efd0d631eea40b9e87b\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 13:12:55.571435 kubelet[2735]: I1216 13:12:55.571426 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e489f8c434a76efd0d631eea40b9e87b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e489f8c434a76efd0d631eea40b9e87b\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 13:12:55.574376 kubelet[2735]: I1216 13:12:55.574344 2735 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Dec 16 13:12:55.574452 kubelet[2735]: I1216 13:12:55.574411 2735 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Dec 16 13:12:55.672745 kubelet[2735]: I1216 13:12:55.672375 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:12:55.672745 kubelet[2735]: I1216 13:12:55.672458 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:12:55.672745 kubelet[2735]: I1216 13:12:55.672504 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:12:55.672745 kubelet[2735]: I1216 13:12:55.672593 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e489f8c434a76efd0d631eea40b9e87b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e489f8c434a76efd0d631eea40b9e87b\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 13:12:55.672745 kubelet[2735]: I1216 13:12:55.672646 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:12:55.672981 kubelet[2735]: I1216 13:12:55.672679 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 13:12:55.672981 kubelet[2735]: I1216 13:12:55.672709 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost"
Dec 16 13:12:55.942409 sudo[2772]: pam_unix(sudo:session): session closed for user root
Dec 16 13:12:56.352096 kubelet[2735]: I1216 13:12:56.351973 2735 apiserver.go:52] "Watching apiserver"
Dec 16 13:12:56.370211 kubelet[2735]: I1216 13:12:56.370180 2735 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 16 13:12:56.394406 kubelet[2735]: I1216 13:12:56.394284 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.394268489 podStartE2EDuration="2.394268489s" podCreationTimestamp="2025-12-16 13:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:12:56.387221942 +0000 UTC m=+1.103813049" watchObservedRunningTime="2025-12-16 13:12:56.394268489 +0000 UTC m=+1.110859596"
Dec 16 13:12:56.398359 kubelet[2735]: I1216 13:12:56.398292 2735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 16 13:12:56.402251 kubelet[2735]: I1216 13:12:56.401954 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.401931202 podStartE2EDuration="2.401931202s" podCreationTimestamp="2025-12-16 13:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:12:56.401374518 +0000 UTC m=+1.117965615" watchObservedRunningTime="2025-12-16 13:12:56.401931202 +0000 UTC m=+1.118522309"
Dec 16 13:12:56.402251 kubelet[2735]: I1216 13:12:56.402126 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.40212171 podStartE2EDuration="2.40212171s" podCreationTimestamp="2025-12-16 13:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:12:56.394516004 +0000 UTC m=+1.111107111" watchObservedRunningTime="2025-12-16 13:12:56.40212171 +0000 UTC m=+1.118712807"
Dec 16 13:12:56.403756 kubelet[2735]: E1216 13:12:56.403718 2735 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 16 13:12:57.396112 sudo[1791]: pam_unix(sudo:session): session closed for user root
Dec 16 13:12:57.398058 sshd[1790]: Connection closed by 10.0.0.1 port 34614
Dec 16 13:12:57.398538 sshd-session[1787]: pam_unix(sshd:session): session closed for user core
Dec 16 13:12:57.402732 systemd[1]: sshd@6-10.0.0.147:22-10.0.0.1:34614.service: Deactivated successfully.
Dec 16 13:12:57.404963 systemd[1]: session-7.scope: Deactivated successfully.
Dec 16 13:12:57.405176 systemd[1]: session-7.scope: Consumed 4.784s CPU time, 267.1M memory peak.
Dec 16 13:12:57.406565 systemd-logind[1561]: Session 7 logged out. Waiting for processes to exit.
Dec 16 13:12:57.407560 systemd-logind[1561]: Removed session 7.
Dec 16 13:13:00.599724 kubelet[2735]: I1216 13:13:00.599689 2735 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 16 13:13:00.600237 containerd[1576]: time="2025-12-16T13:13:00.600000713Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 16 13:13:00.600718 kubelet[2735]: I1216 13:13:00.600689 2735 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 16 13:13:01.869704 systemd[1]: Created slice kubepods-burstable-pod971bc456_1c69_4fbf_b9fd_7bdaa3821617.slice - libcontainer container kubepods-burstable-pod971bc456_1c69_4fbf_b9fd_7bdaa3821617.slice.
Dec 16 13:13:01.879733 systemd[1]: Created slice kubepods-besteffort-pod291c657e_dfb7_409f_8f9f_008ee5f5f42c.slice - libcontainer container kubepods-besteffort-pod291c657e_dfb7_409f_8f9f_008ee5f5f42c.slice.
Dec 16 13:13:01.892018 systemd[1]: Created slice kubepods-besteffort-pod36a48443_a964_42a7_b664_99c23de3dd2d.slice - libcontainer container kubepods-besteffort-pod36a48443_a964_42a7_b664_99c23de3dd2d.slice.
Dec 16 13:13:01.914736 kubelet[2735]: I1216 13:13:01.914697 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-host-proc-sys-kernel\") pod \"cilium-vq6hh\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") " pod="kube-system/cilium-vq6hh"
Dec 16 13:13:01.914736 kubelet[2735]: I1216 13:13:01.914731 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36a48443-a964-42a7-b664-99c23de3dd2d-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-tqgmg\" (UID: \"36a48443-a964-42a7-b664-99c23de3dd2d\") " pod="kube-system/cilium-operator-6f9c7c5859-tqgmg"
Dec 16 13:13:01.914736 kubelet[2735]: I1216 13:13:01.914748 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cilium-config-path\") pod \"cilium-vq6hh\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") " pod="kube-system/cilium-vq6hh"
Dec 16 13:13:01.914736 kubelet[2735]: I1216 13:13:01.914763 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cni-path\") pod \"cilium-vq6hh\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") " pod="kube-system/cilium-vq6hh"
Dec 16 13:13:01.915244 kubelet[2735]: I1216 13:13:01.914777 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-lib-modules\") pod \"cilium-vq6hh\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") " pod="kube-system/cilium-vq6hh"
Dec 16 13:13:01.915244 kubelet[2735]: I1216 13:13:01.914789 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cilium-cgroup\") pod \"cilium-vq6hh\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") " pod="kube-system/cilium-vq6hh"
Dec 16 13:13:01.915244 kubelet[2735]: I1216 13:13:01.914818 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-host-proc-sys-net\") pod \"cilium-vq6hh\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") " pod="kube-system/cilium-vq6hh"
Dec 16 13:13:01.915244 kubelet[2735]: I1216 13:13:01.914955 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-etc-cni-netd\") pod \"cilium-vq6hh\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") " pod="kube-system/cilium-vq6hh"
Dec 16 13:13:01.915244 kubelet[2735]: I1216 13:13:01.915033 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/291c657e-dfb7-409f-8f9f-008ee5f5f42c-kube-proxy\") pod \"kube-proxy-z4v8d\" (UID: \"291c657e-dfb7-409f-8f9f-008ee5f5f42c\") " pod="kube-system/kube-proxy-z4v8d"
Dec 16 13:13:01.915244 kubelet[2735]: I1216 13:13:01.915051 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/291c657e-dfb7-409f-8f9f-008ee5f5f42c-xtables-lock\") pod \"kube-proxy-z4v8d\" (UID: \"291c657e-dfb7-409f-8f9f-008ee5f5f42c\") " pod="kube-system/kube-proxy-z4v8d"
Dec 16 13:13:01.915385 kubelet[2735]: I1216 13:13:01.915072 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdzhk\" (UniqueName: \"kubernetes.io/projected/36a48443-a964-42a7-b664-99c23de3dd2d-kube-api-access-wdzhk\") pod \"cilium-operator-6f9c7c5859-tqgmg\" (UID: \"36a48443-a964-42a7-b664-99c23de3dd2d\") " pod="kube-system/cilium-operator-6f9c7c5859-tqgmg"
Dec 16 13:13:01.915385 kubelet[2735]: I1216 13:13:01.915112 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/291c657e-dfb7-409f-8f9f-008ee5f5f42c-lib-modules\") pod \"kube-proxy-z4v8d\" (UID: \"291c657e-dfb7-409f-8f9f-008ee5f5f42c\") " pod="kube-system/kube-proxy-z4v8d"
Dec 16 13:13:01.915385 kubelet[2735]: I1216 13:13:01.915129 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tczbb\" (UniqueName: \"kubernetes.io/projected/291c657e-dfb7-409f-8f9f-008ee5f5f42c-kube-api-access-tczbb\") pod \"kube-proxy-z4v8d\" (UID: \"291c657e-dfb7-409f-8f9f-008ee5f5f42c\") " pod="kube-system/kube-proxy-z4v8d"
Dec 16 13:13:01.915385 kubelet[2735]: I1216 13:13:01.915169 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-bpf-maps\") pod \"cilium-vq6hh\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") " pod="kube-system/cilium-vq6hh"
Dec 16 13:13:01.915385 kubelet[2735]: I1216 13:13:01.915188 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/971bc456-1c69-4fbf-b9fd-7bdaa3821617-hubble-tls\") pod \"cilium-vq6hh\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") " pod="kube-system/cilium-vq6hh"
Dec 16 13:13:01.915505 kubelet[2735]: I1216 13:13:01.915218 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxgms\" (UniqueName: \"kubernetes.io/projected/971bc456-1c69-4fbf-b9fd-7bdaa3821617-kube-api-access-zxgms\") pod \"cilium-vq6hh\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") " pod="kube-system/cilium-vq6hh"
Dec 16 13:13:01.915505 kubelet[2735]: I1216 13:13:01.915272 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-xtables-lock\") pod \"cilium-vq6hh\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") " pod="kube-system/cilium-vq6hh"
Dec 16 13:13:01.915505 kubelet[2735]: I1216 13:13:01.915289 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cilium-run\") pod \"cilium-vq6hh\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") " pod="kube-system/cilium-vq6hh"
Dec 16 13:13:01.915505 kubelet[2735]: I1216 13:13:01.915302 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-hostproc\") pod \"cilium-vq6hh\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") " pod="kube-system/cilium-vq6hh"
Dec 16 13:13:01.915505 kubelet[2735]: I1216 13:13:01.915352 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/971bc456-1c69-4fbf-b9fd-7bdaa3821617-clustermesh-secrets\") pod \"cilium-vq6hh\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") " pod="kube-system/cilium-vq6hh"
Dec 16 13:13:02.190098 containerd[1576]: time="2025-12-16T13:13:02.190054247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vq6hh,Uid:971bc456-1c69-4fbf-b9fd-7bdaa3821617,Namespace:kube-system,Attempt:0,}"
Dec 16 13:13:02.190488 containerd[1576]: time="2025-12-16T13:13:02.190351136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4v8d,Uid:291c657e-dfb7-409f-8f9f-008ee5f5f42c,Namespace:kube-system,Attempt:0,}"
Dec 16 13:13:02.199391 containerd[1576]: time="2025-12-16T13:13:02.199355933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-tqgmg,Uid:36a48443-a964-42a7-b664-99c23de3dd2d,Namespace:kube-system,Attempt:0,}"
Dec 16 13:13:02.230539 containerd[1576]: time="2025-12-16T13:13:02.230444656Z" level=info msg="connecting to shim 7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090" address="unix:///run/containerd/s/f476032edd62d0fd829189af12d90deb394379c80be69394790b84153329e97a" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:13:02.232405 containerd[1576]: time="2025-12-16T13:13:02.232361733Z" level=info msg="connecting to shim 78b568339dac9f79987cd92a2dfbce79cd8dff7d7f16739927479326609e0512" address="unix:///run/containerd/s/0eb1068d97400b67802ffc9b262c230b3df8687b7c98e9794e0018d644d12e14" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:13:02.238087 containerd[1576]: time="2025-12-16T13:13:02.238041765Z" level=info msg="connecting to shim a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb" address="unix:///run/containerd/s/19d5b4c2ab6c5f970c683c526418e440d6d84347e95c32e059b0989ea2d4ff29" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:13:02.253985 systemd[1]: Started cri-containerd-7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090.scope - libcontainer container 7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090.
Dec 16 13:13:02.262801 systemd[1]: Started cri-containerd-78b568339dac9f79987cd92a2dfbce79cd8dff7d7f16739927479326609e0512.scope - libcontainer container 78b568339dac9f79987cd92a2dfbce79cd8dff7d7f16739927479326609e0512.
Dec 16 13:13:02.269760 systemd[1]: Started cri-containerd-a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb.scope - libcontainer container a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb.
Dec 16 13:13:02.429356 containerd[1576]: time="2025-12-16T13:13:02.429302279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vq6hh,Uid:971bc456-1c69-4fbf-b9fd-7bdaa3821617,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\""
Dec 16 13:13:02.431009 containerd[1576]: time="2025-12-16T13:13:02.430977260Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 16 13:13:02.588246 containerd[1576]: time="2025-12-16T13:13:02.588123852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4v8d,Uid:291c657e-dfb7-409f-8f9f-008ee5f5f42c,Namespace:kube-system,Attempt:0,} returns sandbox id \"78b568339dac9f79987cd92a2dfbce79cd8dff7d7f16739927479326609e0512\""
Dec 16 13:13:02.778875 containerd[1576]: time="2025-12-16T13:13:02.778778424Z" level=info msg="CreateContainer within sandbox \"78b568339dac9f79987cd92a2dfbce79cd8dff7d7f16739927479326609e0512\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 16 13:13:02.779356 containerd[1576]: time="2025-12-16T13:13:02.779305545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-tqgmg,Uid:36a48443-a964-42a7-b664-99c23de3dd2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb\""
Dec 16 13:13:02.790114 containerd[1576]: time="2025-12-16T13:13:02.790075787Z" level=info msg="Container 64848ec92f1992597704339e6d15b67e1053416fa7d257ee29255c7f962fdbf7: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:13:02.799170 containerd[1576]: time="2025-12-16T13:13:02.799129898Z" level=info msg="CreateContainer within sandbox \"78b568339dac9f79987cd92a2dfbce79cd8dff7d7f16739927479326609e0512\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"64848ec92f1992597704339e6d15b67e1053416fa7d257ee29255c7f962fdbf7\""
Dec 16 13:13:02.799718 containerd[1576]: time="2025-12-16T13:13:02.799682449Z" level=info msg="StartContainer for \"64848ec92f1992597704339e6d15b67e1053416fa7d257ee29255c7f962fdbf7\""
Dec 16 13:13:02.800964 containerd[1576]: time="2025-12-16T13:13:02.800934710Z" level=info msg="connecting to shim 64848ec92f1992597704339e6d15b67e1053416fa7d257ee29255c7f962fdbf7" address="unix:///run/containerd/s/0eb1068d97400b67802ffc9b262c230b3df8687b7c98e9794e0018d644d12e14" protocol=ttrpc version=3
Dec 16 13:13:02.825964 systemd[1]: Started cri-containerd-64848ec92f1992597704339e6d15b67e1053416fa7d257ee29255c7f962fdbf7.scope - libcontainer container 64848ec92f1992597704339e6d15b67e1053416fa7d257ee29255c7f962fdbf7.
Dec 16 13:13:02.922134 containerd[1576]: time="2025-12-16T13:13:02.922102831Z" level=info msg="StartContainer for \"64848ec92f1992597704339e6d15b67e1053416fa7d257ee29255c7f962fdbf7\" returns successfully"
Dec 16 13:13:03.422651 kubelet[2735]: I1216 13:13:03.422575 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z4v8d" podStartSLOduration=2.422552459 podStartE2EDuration="2.422552459s" podCreationTimestamp="2025-12-16 13:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:13:03.422244449 +0000 UTC m=+8.138835556" watchObservedRunningTime="2025-12-16 13:13:03.422552459 +0000 UTC m=+8.139143566"
Dec 16 13:13:09.243157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3312477890.mount: Deactivated successfully.
Dec 16 13:13:11.031447 containerd[1576]: time="2025-12-16T13:13:11.031392189Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:11.032197 containerd[1576]: time="2025-12-16T13:13:11.032158675Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Dec 16 13:13:11.033213 containerd[1576]: time="2025-12-16T13:13:11.033164013Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:11.034583 containerd[1576]: time="2025-12-16T13:13:11.034551319Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.603534933s"
Dec 16 13:13:11.034646 containerd[1576]: time="2025-12-16T13:13:11.034583810Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 16 13:13:11.035494 containerd[1576]: time="2025-12-16T13:13:11.035453842Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 16 13:13:11.039376 containerd[1576]: time="2025-12-16T13:13:11.039347326Z" level=info msg="CreateContainer within sandbox \"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 16 13:13:11.054412 containerd[1576]: time="2025-12-16T13:13:11.054375396Z" level=info msg="Container 442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:13:11.058459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2159618198.mount: Deactivated successfully.
Dec 16 13:13:11.063006 containerd[1576]: time="2025-12-16T13:13:11.062979216Z" level=info msg="CreateContainer within sandbox \"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a\""
Dec 16 13:13:11.063352 containerd[1576]: time="2025-12-16T13:13:11.063319272Z" level=info msg="StartContainer for \"442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a\""
Dec 16 13:13:11.064099 containerd[1576]: time="2025-12-16T13:13:11.064076280Z" level=info msg="connecting to shim 442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a" address="unix:///run/containerd/s/f476032edd62d0fd829189af12d90deb394379c80be69394790b84153329e97a" protocol=ttrpc version=3
Dec 16 13:13:11.082971 systemd[1]: Started cri-containerd-442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a.scope - libcontainer container 442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a.
Dec 16 13:13:11.113570 containerd[1576]: time="2025-12-16T13:13:11.113527622Z" level=info msg="StartContainer for \"442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a\" returns successfully"
Dec 16 13:13:11.126392 systemd[1]: cri-containerd-442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a.scope: Deactivated successfully.
Dec 16 13:13:11.128636 containerd[1576]: time="2025-12-16T13:13:11.128604935Z" level=info msg="received container exit event container_id:\"442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a\" id:\"442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a\" pid:3160 exited_at:{seconds:1765890791 nanos:128193494}"
Dec 16 13:13:11.147113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a-rootfs.mount: Deactivated successfully.
Dec 16 13:13:11.462270 update_engine[1564]: I20251216 13:13:11.462198 1564 update_attempter.cc:509] Updating boot flags...
Dec 16 13:13:12.444690 containerd[1576]: time="2025-12-16T13:13:12.444632163Z" level=info msg="CreateContainer within sandbox \"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 13:13:12.453138 containerd[1576]: time="2025-12-16T13:13:12.453097848Z" level=info msg="Container ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:13:12.458812 containerd[1576]: time="2025-12-16T13:13:12.458775703Z" level=info msg="CreateContainer within sandbox \"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2\""
Dec 16 13:13:12.459199 containerd[1576]: time="2025-12-16T13:13:12.459174309Z" level=info msg="StartContainer for \"ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2\""
Dec 16 13:13:12.459893 containerd[1576]: time="2025-12-16T13:13:12.459872845Z" level=info msg="connecting to shim ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2" address="unix:///run/containerd/s/f476032edd62d0fd829189af12d90deb394379c80be69394790b84153329e97a" protocol=ttrpc version=3
Dec 16 13:13:12.478945 systemd[1]: Started cri-containerd-ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2.scope - libcontainer container ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2.
Dec 16 13:13:12.596535 containerd[1576]: time="2025-12-16T13:13:12.596490254Z" level=info msg="StartContainer for \"ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2\" returns successfully"
Dec 16 13:13:12.638931 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:13:12.639304 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:13:12.639604 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:13:12.641151 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:13:12.642960 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:13:12.643377 systemd[1]: cri-containerd-ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2.scope: Deactivated successfully.
Dec 16 13:13:12.645389 containerd[1576]: time="2025-12-16T13:13:12.645361034Z" level=info msg="received container exit event container_id:\"ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2\" id:\"ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2\" pid:3223 exited_at:{seconds:1765890792 nanos:644720559}"
Dec 16 13:13:12.664216 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:13:13.457394 containerd[1576]: time="2025-12-16T13:13:13.452144989Z" level=info msg="CreateContainer within sandbox \"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 13:13:13.455009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2-rootfs.mount: Deactivated successfully.
Dec 16 13:13:13.495252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3159386064.mount: Deactivated successfully.
Dec 16 13:13:13.500858 containerd[1576]: time="2025-12-16T13:13:13.499968474Z" level=info msg="Container 2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:13:13.503731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1614231503.mount: Deactivated successfully.
Dec 16 13:13:13.511727 containerd[1576]: time="2025-12-16T13:13:13.511684221Z" level=info msg="CreateContainer within sandbox \"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f\""
Dec 16 13:13:13.513404 containerd[1576]: time="2025-12-16T13:13:13.513377663Z" level=info msg="StartContainer for \"2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f\""
Dec 16 13:13:13.514731 containerd[1576]: time="2025-12-16T13:13:13.514710239Z" level=info msg="connecting to shim 2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f" address="unix:///run/containerd/s/f476032edd62d0fd829189af12d90deb394379c80be69394790b84153329e97a" protocol=ttrpc version=3
Dec 16 13:13:13.537952 systemd[1]: Started cri-containerd-2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f.scope - libcontainer container 2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f.
Dec 16 13:13:13.618686 systemd[1]: cri-containerd-2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f.scope: Deactivated successfully.
Dec 16 13:13:13.714986 containerd[1576]: time="2025-12-16T13:13:13.714578943Z" level=info msg="received container exit event container_id:\"2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f\" id:\"2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f\" pid:3282 exited_at:{seconds:1765890793 nanos:620298002}"
Dec 16 13:13:13.723658 containerd[1576]: time="2025-12-16T13:13:13.723619206Z" level=info msg="StartContainer for \"2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f\" returns successfully"
Dec 16 13:13:13.904964 containerd[1576]: time="2025-12-16T13:13:13.904902929Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:13.907575 containerd[1576]: time="2025-12-16T13:13:13.907549367Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Dec 16 13:13:13.909002 containerd[1576]: time="2025-12-16T13:13:13.908969660Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:13.910077 containerd[1576]: time="2025-12-16T13:13:13.910044649Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.874552683s"
Dec 16 13:13:13.910077 containerd[1576]: time="2025-12-16T13:13:13.910072101Z" level=info msg="PullImage
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 16 13:13:13.914267 containerd[1576]: time="2025-12-16T13:13:13.914238421Z" level=info msg="CreateContainer within sandbox \"a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 16 13:13:13.922242 containerd[1576]: time="2025-12-16T13:13:13.922190431Z" level=info msg="Container c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:13.928671 containerd[1576]: time="2025-12-16T13:13:13.928637726Z" level=info msg="CreateContainer within sandbox \"a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\"" Dec 16 13:13:13.929080 containerd[1576]: time="2025-12-16T13:13:13.929011254Z" level=info msg="StartContainer for \"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\"" Dec 16 13:13:13.929744 containerd[1576]: time="2025-12-16T13:13:13.929708096Z" level=info msg="connecting to shim c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808" address="unix:///run/containerd/s/19d5b4c2ab6c5f970c683c526418e440d6d84347e95c32e059b0989ea2d4ff29" protocol=ttrpc version=3 Dec 16 13:13:13.953073 systemd[1]: Started cri-containerd-c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808.scope - libcontainer container c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808. 
Dec 16 13:13:13.983226 containerd[1576]: time="2025-12-16T13:13:13.982966668Z" level=info msg="StartContainer for \"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\" returns successfully" Dec 16 13:13:14.537688 containerd[1576]: time="2025-12-16T13:13:14.537633405Z" level=info msg="CreateContainer within sandbox \"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 13:13:14.554619 containerd[1576]: time="2025-12-16T13:13:14.554574128Z" level=info msg="Container f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:14.560050 kubelet[2735]: I1216 13:13:14.556890 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-tqgmg" podStartSLOduration=2.42704921 podStartE2EDuration="13.556873525s" podCreationTimestamp="2025-12-16 13:13:01 +0000 UTC" firstStartedPulling="2025-12-16 13:13:02.780911134 +0000 UTC m=+7.497502241" lastFinishedPulling="2025-12-16 13:13:13.910735449 +0000 UTC m=+18.627326556" observedRunningTime="2025-12-16 13:13:14.556160032 +0000 UTC m=+19.272751140" watchObservedRunningTime="2025-12-16 13:13:14.556873525 +0000 UTC m=+19.273464622" Dec 16 13:13:14.565799 containerd[1576]: time="2025-12-16T13:13:14.565759304Z" level=info msg="CreateContainer within sandbox \"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30\"" Dec 16 13:13:14.569041 containerd[1576]: time="2025-12-16T13:13:14.569001477Z" level=info msg="StartContainer for \"f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30\"" Dec 16 13:13:14.571014 containerd[1576]: time="2025-12-16T13:13:14.570983634Z" level=info msg="connecting to shim 
f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30" address="unix:///run/containerd/s/f476032edd62d0fd829189af12d90deb394379c80be69394790b84153329e97a" protocol=ttrpc version=3 Dec 16 13:13:14.601958 systemd[1]: Started cri-containerd-f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30.scope - libcontainer container f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30. Dec 16 13:13:14.659916 containerd[1576]: time="2025-12-16T13:13:14.659872006Z" level=info msg="StartContainer for \"f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30\" returns successfully" Dec 16 13:13:14.663430 systemd[1]: cri-containerd-f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30.scope: Deactivated successfully. Dec 16 13:13:14.663865 containerd[1576]: time="2025-12-16T13:13:14.663746387Z" level=info msg="received container exit event container_id:\"f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30\" id:\"f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30\" pid:3360 exited_at:{seconds:1765890794 nanos:663521751}" Dec 16 13:13:15.454986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30-rootfs.mount: Deactivated successfully. Dec 16 13:13:15.482696 containerd[1576]: time="2025-12-16T13:13:15.482647681Z" level=info msg="CreateContainer within sandbox \"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 13:13:15.586023 containerd[1576]: time="2025-12-16T13:13:15.585975291Z" level=info msg="Container 5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:15.589572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount61227954.mount: Deactivated successfully. 
Dec 16 13:13:15.594266 containerd[1576]: time="2025-12-16T13:13:15.594211698Z" level=info msg="CreateContainer within sandbox \"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\"" Dec 16 13:13:15.594893 containerd[1576]: time="2025-12-16T13:13:15.594868032Z" level=info msg="StartContainer for \"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\"" Dec 16 13:13:15.595724 containerd[1576]: time="2025-12-16T13:13:15.595697873Z" level=info msg="connecting to shim 5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba" address="unix:///run/containerd/s/f476032edd62d0fd829189af12d90deb394379c80be69394790b84153329e97a" protocol=ttrpc version=3 Dec 16 13:13:15.616082 systemd[1]: Started cri-containerd-5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba.scope - libcontainer container 5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba. Dec 16 13:13:15.675033 containerd[1576]: time="2025-12-16T13:13:15.674983172Z" level=info msg="StartContainer for \"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\" returns successfully" Dec 16 13:13:15.839965 kubelet[2735]: I1216 13:13:15.839519 2735 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 13:13:15.885974 systemd[1]: Created slice kubepods-burstable-pod523a11aa_b329_4a6b_a601_cfc75ff081df.slice - libcontainer container kubepods-burstable-pod523a11aa_b329_4a6b_a601_cfc75ff081df.slice. Dec 16 13:13:15.891941 systemd[1]: Created slice kubepods-burstable-podc6ef4f46_4b3a_480e_8695_4751cdce70e0.slice - libcontainer container kubepods-burstable-podc6ef4f46_4b3a_480e_8695_4751cdce70e0.slice. 
Dec 16 13:13:15.977573 kubelet[2735]: I1216 13:13:15.977505 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/523a11aa-b329-4a6b-a601-cfc75ff081df-config-volume\") pod \"coredns-66bc5c9577-m9wv9\" (UID: \"523a11aa-b329-4a6b-a601-cfc75ff081df\") " pod="kube-system/coredns-66bc5c9577-m9wv9" Dec 16 13:13:15.977573 kubelet[2735]: I1216 13:13:15.977556 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clr8n\" (UniqueName: \"kubernetes.io/projected/523a11aa-b329-4a6b-a601-cfc75ff081df-kube-api-access-clr8n\") pod \"coredns-66bc5c9577-m9wv9\" (UID: \"523a11aa-b329-4a6b-a601-cfc75ff081df\") " pod="kube-system/coredns-66bc5c9577-m9wv9" Dec 16 13:13:15.977573 kubelet[2735]: I1216 13:13:15.977577 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhpbp\" (UniqueName: \"kubernetes.io/projected/c6ef4f46-4b3a-480e-8695-4751cdce70e0-kube-api-access-nhpbp\") pod \"coredns-66bc5c9577-97t8t\" (UID: \"c6ef4f46-4b3a-480e-8695-4751cdce70e0\") " pod="kube-system/coredns-66bc5c9577-97t8t" Dec 16 13:13:15.977573 kubelet[2735]: I1216 13:13:15.977591 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6ef4f46-4b3a-480e-8695-4751cdce70e0-config-volume\") pod \"coredns-66bc5c9577-97t8t\" (UID: \"c6ef4f46-4b3a-480e-8695-4751cdce70e0\") " pod="kube-system/coredns-66bc5c9577-97t8t" Dec 16 13:13:16.192673 containerd[1576]: time="2025-12-16T13:13:16.192624714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m9wv9,Uid:523a11aa-b329-4a6b-a601-cfc75ff081df,Namespace:kube-system,Attempt:0,}" Dec 16 13:13:16.198990 containerd[1576]: time="2025-12-16T13:13:16.198943583Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-97t8t,Uid:c6ef4f46-4b3a-480e-8695-4751cdce70e0,Namespace:kube-system,Attempt:0,}" Dec 16 13:13:16.514936 kubelet[2735]: I1216 13:13:16.514532 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vq6hh" podStartSLOduration=6.909810468 podStartE2EDuration="15.514514478s" podCreationTimestamp="2025-12-16 13:13:01 +0000 UTC" firstStartedPulling="2025-12-16 13:13:02.430640855 +0000 UTC m=+7.147231962" lastFinishedPulling="2025-12-16 13:13:11.035344865 +0000 UTC m=+15.751935972" observedRunningTime="2025-12-16 13:13:16.513985066 +0000 UTC m=+21.230576173" watchObservedRunningTime="2025-12-16 13:13:16.514514478 +0000 UTC m=+21.231105585" Dec 16 13:13:17.852792 systemd-networkd[1483]: cilium_host: Link UP Dec 16 13:13:17.853539 systemd-networkd[1483]: cilium_net: Link UP Dec 16 13:13:17.854291 systemd-networkd[1483]: cilium_net: Gained carrier Dec 16 13:13:17.854607 systemd-networkd[1483]: cilium_host: Gained carrier Dec 16 13:13:17.864292 systemd-networkd[1483]: cilium_host: Gained IPv6LL Dec 16 13:13:17.946748 systemd-networkd[1483]: cilium_vxlan: Link UP Dec 16 13:13:17.946760 systemd-networkd[1483]: cilium_vxlan: Gained carrier Dec 16 13:13:17.988019 systemd-networkd[1483]: cilium_net: Gained IPv6LL Dec 16 13:13:18.153864 kernel: NET: Registered PF_ALG protocol family Dec 16 13:13:18.756867 systemd-networkd[1483]: lxc_health: Link UP Dec 16 13:13:18.771893 systemd-networkd[1483]: lxc_health: Gained carrier Dec 16 13:13:19.240922 systemd-networkd[1483]: lxca0090ba49819: Link UP Dec 16 13:13:19.241932 kernel: eth0: renamed from tmp2c4e5 Dec 16 13:13:19.242908 systemd-networkd[1483]: lxca0090ba49819: Gained carrier Dec 16 13:13:19.257462 systemd-networkd[1483]: lxc0b5ac5bb8f64: Link UP Dec 16 13:13:19.266209 kernel: eth0: renamed from tmp90616 Dec 16 13:13:19.267662 systemd-networkd[1483]: lxc0b5ac5bb8f64: Gained carrier Dec 16 13:13:19.636030 systemd-networkd[1483]: cilium_vxlan: 
Gained IPv6LL Dec 16 13:13:19.955977 systemd-networkd[1483]: lxc_health: Gained IPv6LL Dec 16 13:13:20.724043 systemd-networkd[1483]: lxca0090ba49819: Gained IPv6LL Dec 16 13:13:20.724412 systemd-networkd[1483]: lxc0b5ac5bb8f64: Gained IPv6LL Dec 16 13:13:23.578185 containerd[1576]: time="2025-12-16T13:13:23.578135231Z" level=info msg="connecting to shim 2c4e514b99097c15083a6deddb8b65b0ea64b64b6f766f31d23ae6912c904e4a" address="unix:///run/containerd/s/c1e926215c40252b39a9cedb0f0b4c16f37a940aa5f428ce9df6749faf9b6847" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:13:23.578185 containerd[1576]: time="2025-12-16T13:13:23.578171149Z" level=info msg="connecting to shim 90616378a2064b1b35ca7a490227e374e224bc091a029e86dc54bc0d90d05413" address="unix:///run/containerd/s/d91eb297accbdb137efdd34dd4e281b9c6cf644d39848a0fae23bd45a7eec642" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:13:23.610972 systemd[1]: Started cri-containerd-2c4e514b99097c15083a6deddb8b65b0ea64b64b6f766f31d23ae6912c904e4a.scope - libcontainer container 2c4e514b99097c15083a6deddb8b65b0ea64b64b6f766f31d23ae6912c904e4a. Dec 16 13:13:23.612864 systemd[1]: Started cri-containerd-90616378a2064b1b35ca7a490227e374e224bc091a029e86dc54bc0d90d05413.scope - libcontainer container 90616378a2064b1b35ca7a490227e374e224bc091a029e86dc54bc0d90d05413. 
Dec 16 13:13:23.626137 systemd-resolved[1396]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 13:13:23.628108 systemd-resolved[1396]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 13:13:23.661133 containerd[1576]: time="2025-12-16T13:13:23.661077581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m9wv9,Uid:523a11aa-b329-4a6b-a601-cfc75ff081df,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c4e514b99097c15083a6deddb8b65b0ea64b64b6f766f31d23ae6912c904e4a\"" Dec 16 13:13:23.669199 containerd[1576]: time="2025-12-16T13:13:23.668307971Z" level=info msg="CreateContainer within sandbox \"2c4e514b99097c15083a6deddb8b65b0ea64b64b6f766f31d23ae6912c904e4a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:13:23.685119 containerd[1576]: time="2025-12-16T13:13:23.685072637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-97t8t,Uid:c6ef4f46-4b3a-480e-8695-4751cdce70e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"90616378a2064b1b35ca7a490227e374e224bc091a029e86dc54bc0d90d05413\"" Dec 16 13:13:23.690791 containerd[1576]: time="2025-12-16T13:13:23.690752974Z" level=info msg="CreateContainer within sandbox \"90616378a2064b1b35ca7a490227e374e224bc091a029e86dc54bc0d90d05413\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:13:23.693875 containerd[1576]: time="2025-12-16T13:13:23.693819499Z" level=info msg="Container d7bdd335863f2e4835d214141f86eee55f94e2abcb16fdd0bef6f70937dcb2ae: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:23.702636 containerd[1576]: time="2025-12-16T13:13:23.702611845Z" level=info msg="CreateContainer within sandbox \"2c4e514b99097c15083a6deddb8b65b0ea64b64b6f766f31d23ae6912c904e4a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7bdd335863f2e4835d214141f86eee55f94e2abcb16fdd0bef6f70937dcb2ae\"" 
Dec 16 13:13:23.703167 containerd[1576]: time="2025-12-16T13:13:23.703114042Z" level=info msg="StartContainer for \"d7bdd335863f2e4835d214141f86eee55f94e2abcb16fdd0bef6f70937dcb2ae\"" Dec 16 13:13:23.704216 containerd[1576]: time="2025-12-16T13:13:23.703920985Z" level=info msg="connecting to shim d7bdd335863f2e4835d214141f86eee55f94e2abcb16fdd0bef6f70937dcb2ae" address="unix:///run/containerd/s/c1e926215c40252b39a9cedb0f0b4c16f37a940aa5f428ce9df6749faf9b6847" protocol=ttrpc version=3 Dec 16 13:13:23.704353 containerd[1576]: time="2025-12-16T13:13:23.704327782Z" level=info msg="Container 191e4498ea8ee8d1c405a3589d3be3c002971d6015c991ac711410b4ee7391c6: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:23.712867 containerd[1576]: time="2025-12-16T13:13:23.712838457Z" level=info msg="CreateContainer within sandbox \"90616378a2064b1b35ca7a490227e374e224bc091a029e86dc54bc0d90d05413\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"191e4498ea8ee8d1c405a3589d3be3c002971d6015c991ac711410b4ee7391c6\"" Dec 16 13:13:23.714850 containerd[1576]: time="2025-12-16T13:13:23.714656998Z" level=info msg="StartContainer for \"191e4498ea8ee8d1c405a3589d3be3c002971d6015c991ac711410b4ee7391c6\"" Dec 16 13:13:23.715979 containerd[1576]: time="2025-12-16T13:13:23.715953194Z" level=info msg="connecting to shim 191e4498ea8ee8d1c405a3589d3be3c002971d6015c991ac711410b4ee7391c6" address="unix:///run/containerd/s/d91eb297accbdb137efdd34dd4e281b9c6cf644d39848a0fae23bd45a7eec642" protocol=ttrpc version=3 Dec 16 13:13:23.727047 systemd[1]: Started cri-containerd-d7bdd335863f2e4835d214141f86eee55f94e2abcb16fdd0bef6f70937dcb2ae.scope - libcontainer container d7bdd335863f2e4835d214141f86eee55f94e2abcb16fdd0bef6f70937dcb2ae. Dec 16 13:13:23.731344 systemd[1]: Started cri-containerd-191e4498ea8ee8d1c405a3589d3be3c002971d6015c991ac711410b4ee7391c6.scope - libcontainer container 191e4498ea8ee8d1c405a3589d3be3c002971d6015c991ac711410b4ee7391c6. 
Dec 16 13:13:24.018097 containerd[1576]: time="2025-12-16T13:13:24.017658477Z" level=info msg="StartContainer for \"d7bdd335863f2e4835d214141f86eee55f94e2abcb16fdd0bef6f70937dcb2ae\" returns successfully" Dec 16 13:13:24.020702 containerd[1576]: time="2025-12-16T13:13:24.020677590Z" level=info msg="StartContainer for \"191e4498ea8ee8d1c405a3589d3be3c002971d6015c991ac711410b4ee7391c6\" returns successfully" Dec 16 13:13:24.790567 kubelet[2735]: I1216 13:13:24.790494 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-m9wv9" podStartSLOduration=23.7904749 podStartE2EDuration="23.7904749s" podCreationTimestamp="2025-12-16 13:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:13:24.78998688 +0000 UTC m=+29.506577987" watchObservedRunningTime="2025-12-16 13:13:24.7904749 +0000 UTC m=+29.507066008" Dec 16 13:13:24.791055 kubelet[2735]: I1216 13:13:24.790581 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-97t8t" podStartSLOduration=23.790577484 podStartE2EDuration="23.790577484s" podCreationTimestamp="2025-12-16 13:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:13:24.781061478 +0000 UTC m=+29.497652585" watchObservedRunningTime="2025-12-16 13:13:24.790577484 +0000 UTC m=+29.507168591" Dec 16 13:13:25.490848 systemd[1]: Started sshd@7-10.0.0.147:22-10.0.0.1:55188.service - OpenSSH per-connection server daemon (10.0.0.1:55188). 
Dec 16 13:13:25.564027 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 55188 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:13:25.565675 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:25.570352 systemd-logind[1561]: New session 8 of user core. Dec 16 13:13:25.579972 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 13:13:25.703690 sshd[4077]: Connection closed by 10.0.0.1 port 55188 Dec 16 13:13:25.704019 sshd-session[4074]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:25.708536 systemd[1]: sshd@7-10.0.0.147:22-10.0.0.1:55188.service: Deactivated successfully. Dec 16 13:13:25.710530 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:13:25.711356 systemd-logind[1561]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:13:25.712450 systemd-logind[1561]: Removed session 8. Dec 16 13:13:26.073898 kubelet[2735]: I1216 13:13:26.073842 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:13:30.716479 systemd[1]: Started sshd@8-10.0.0.147:22-10.0.0.1:55202.service - OpenSSH per-connection server daemon (10.0.0.1:55202). Dec 16 13:13:30.766460 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 55202 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:13:30.767691 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:30.771561 systemd-logind[1561]: New session 9 of user core. Dec 16 13:13:30.777931 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:13:30.955119 sshd[4100]: Connection closed by 10.0.0.1 port 55202 Dec 16 13:13:30.955483 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:30.959193 systemd[1]: sshd@8-10.0.0.147:22-10.0.0.1:55202.service: Deactivated successfully. 
Dec 16 13:13:30.961216 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:13:30.962835 systemd-logind[1561]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:13:30.964095 systemd-logind[1561]: Removed session 9. Dec 16 13:13:35.971413 systemd[1]: Started sshd@9-10.0.0.147:22-10.0.0.1:46490.service - OpenSSH per-connection server daemon (10.0.0.1:46490). Dec 16 13:13:36.018893 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 46490 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:13:36.020547 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:36.024570 systemd-logind[1561]: New session 10 of user core. Dec 16 13:13:36.033960 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 13:13:36.149026 sshd[4119]: Connection closed by 10.0.0.1 port 46490 Dec 16 13:13:36.149370 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:36.153656 systemd[1]: sshd@9-10.0.0.147:22-10.0.0.1:46490.service: Deactivated successfully. Dec 16 13:13:36.155597 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:13:36.156474 systemd-logind[1561]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:13:36.157535 systemd-logind[1561]: Removed session 10. Dec 16 13:13:41.165104 systemd[1]: Started sshd@10-10.0.0.147:22-10.0.0.1:46492.service - OpenSSH per-connection server daemon (10.0.0.1:46492). Dec 16 13:13:41.206626 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 46492 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:13:41.207767 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:41.211771 systemd-logind[1561]: New session 11 of user core. Dec 16 13:13:41.221973 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 13:13:41.330785 sshd[4136]: Connection closed by 10.0.0.1 port 46492 Dec 16 13:13:41.331150 sshd-session[4133]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:41.335711 systemd[1]: sshd@10-10.0.0.147:22-10.0.0.1:46492.service: Deactivated successfully. Dec 16 13:13:41.337845 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:13:41.338615 systemd-logind[1561]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:13:41.339875 systemd-logind[1561]: Removed session 11. Dec 16 13:13:46.342798 systemd[1]: Started sshd@11-10.0.0.147:22-10.0.0.1:50512.service - OpenSSH per-connection server daemon (10.0.0.1:50512). Dec 16 13:13:46.400462 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 50512 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:13:46.401941 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:46.406208 systemd-logind[1561]: New session 12 of user core. Dec 16 13:13:46.415983 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 13:13:46.524045 sshd[4153]: Connection closed by 10.0.0.1 port 50512 Dec 16 13:13:46.524400 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:46.536403 systemd[1]: sshd@11-10.0.0.147:22-10.0.0.1:50512.service: Deactivated successfully. Dec 16 13:13:46.538269 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:13:46.538974 systemd-logind[1561]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:13:46.541370 systemd[1]: Started sshd@12-10.0.0.147:22-10.0.0.1:50524.service - OpenSSH per-connection server daemon (10.0.0.1:50524). Dec 16 13:13:46.542050 systemd-logind[1561]: Removed session 12. 
Dec 16 13:13:46.595766 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 50524 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:13:46.596975 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:46.601124 systemd-logind[1561]: New session 13 of user core. Dec 16 13:13:46.610960 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 13:13:46.763257 sshd[4170]: Connection closed by 10.0.0.1 port 50524 Dec 16 13:13:46.764270 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:46.777201 systemd[1]: sshd@12-10.0.0.147:22-10.0.0.1:50524.service: Deactivated successfully. Dec 16 13:13:46.780044 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:13:46.782098 systemd-logind[1561]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:13:46.785391 systemd-logind[1561]: Removed session 13. Dec 16 13:13:46.786979 systemd[1]: Started sshd@13-10.0.0.147:22-10.0.0.1:50540.service - OpenSSH per-connection server daemon (10.0.0.1:50540). Dec 16 13:13:46.854083 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 50540 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:13:46.855927 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:46.860738 systemd-logind[1561]: New session 14 of user core. Dec 16 13:13:46.870066 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 13:13:46.982611 sshd[4185]: Connection closed by 10.0.0.1 port 50540 Dec 16 13:13:46.982988 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:46.987451 systemd[1]: sshd@13-10.0.0.147:22-10.0.0.1:50540.service: Deactivated successfully. Dec 16 13:13:46.989437 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:13:46.990431 systemd-logind[1561]: Session 14 logged out. Waiting for processes to exit. 
Dec 16 13:13:46.991587 systemd-logind[1561]: Removed session 14.
Dec 16 13:13:51.994515 systemd[1]: Started sshd@14-10.0.0.147:22-10.0.0.1:50552.service - OpenSSH per-connection server daemon (10.0.0.1:50552).
Dec 16 13:13:52.046152 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 50552 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:13:52.047714 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:52.051688 systemd-logind[1561]: New session 15 of user core.
Dec 16 13:13:52.064953 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 13:13:52.176675 sshd[4202]: Connection closed by 10.0.0.1 port 50552
Dec 16 13:13:52.177049 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:52.181150 systemd[1]: sshd@14-10.0.0.147:22-10.0.0.1:50552.service: Deactivated successfully.
Dec 16 13:13:52.183151 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 13:13:52.185070 systemd-logind[1561]: Session 15 logged out. Waiting for processes to exit.
Dec 16 13:13:52.186592 systemd-logind[1561]: Removed session 15.
Dec 16 13:13:57.188488 systemd[1]: Started sshd@15-10.0.0.147:22-10.0.0.1:33222.service - OpenSSH per-connection server daemon (10.0.0.1:33222).
Dec 16 13:13:57.235434 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 33222 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:13:57.236607 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:57.240573 systemd-logind[1561]: New session 16 of user core.
Dec 16 13:13:57.248931 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 13:13:57.358788 sshd[4220]: Connection closed by 10.0.0.1 port 33222
Dec 16 13:13:57.359145 sshd-session[4217]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:57.371302 systemd[1]: sshd@15-10.0.0.147:22-10.0.0.1:33222.service: Deactivated successfully.
Dec 16 13:13:57.373411 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 13:13:57.374207 systemd-logind[1561]: Session 16 logged out. Waiting for processes to exit.
Dec 16 13:13:57.376807 systemd[1]: Started sshd@16-10.0.0.147:22-10.0.0.1:33226.service - OpenSSH per-connection server daemon (10.0.0.1:33226).
Dec 16 13:13:57.377618 systemd-logind[1561]: Removed session 16.
Dec 16 13:13:57.424975 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 33226 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:13:57.426670 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:57.431000 systemd-logind[1561]: New session 17 of user core.
Dec 16 13:13:57.444961 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 13:13:57.615020 sshd[4236]: Connection closed by 10.0.0.1 port 33226
Dec 16 13:13:57.615350 sshd-session[4233]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:57.623488 systemd[1]: sshd@16-10.0.0.147:22-10.0.0.1:33226.service: Deactivated successfully.
Dec 16 13:13:57.625392 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 13:13:57.626113 systemd-logind[1561]: Session 17 logged out. Waiting for processes to exit.
Dec 16 13:13:57.629123 systemd[1]: Started sshd@17-10.0.0.147:22-10.0.0.1:33230.service - OpenSSH per-connection server daemon (10.0.0.1:33230).
Dec 16 13:13:57.629741 systemd-logind[1561]: Removed session 17.
Dec 16 13:13:57.687419 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 33230 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:13:57.688691 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:57.692926 systemd-logind[1561]: New session 18 of user core.
Dec 16 13:13:57.702944 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 13:13:58.272896 sshd[4251]: Connection closed by 10.0.0.1 port 33230
Dec 16 13:13:58.273362 sshd-session[4248]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:58.281974 systemd[1]: sshd@17-10.0.0.147:22-10.0.0.1:33230.service: Deactivated successfully.
Dec 16 13:13:58.284896 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 13:13:58.285714 systemd-logind[1561]: Session 18 logged out. Waiting for processes to exit.
Dec 16 13:13:58.289214 systemd[1]: Started sshd@18-10.0.0.147:22-10.0.0.1:33244.service - OpenSSH per-connection server daemon (10.0.0.1:33244).
Dec 16 13:13:58.290299 systemd-logind[1561]: Removed session 18.
Dec 16 13:13:58.337149 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 33244 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:13:58.338998 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:58.343165 systemd-logind[1561]: New session 19 of user core.
Dec 16 13:13:58.352940 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 13:13:58.573685 sshd[4270]: Connection closed by 10.0.0.1 port 33244
Dec 16 13:13:58.575238 sshd-session[4267]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:58.584590 systemd[1]: sshd@18-10.0.0.147:22-10.0.0.1:33244.service: Deactivated successfully.
Dec 16 13:13:58.586518 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 13:13:58.587341 systemd-logind[1561]: Session 19 logged out. Waiting for processes to exit.
Dec 16 13:13:58.590030 systemd[1]: Started sshd@19-10.0.0.147:22-10.0.0.1:33254.service - OpenSSH per-connection server daemon (10.0.0.1:33254).
Dec 16 13:13:58.591519 systemd-logind[1561]: Removed session 19.
Dec 16 13:13:58.651937 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 33254 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:13:58.653270 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:58.658416 systemd-logind[1561]: New session 20 of user core.
Dec 16 13:13:58.667969 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 13:13:58.777220 sshd[4285]: Connection closed by 10.0.0.1 port 33254
Dec 16 13:13:58.777554 sshd-session[4282]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:58.782221 systemd[1]: sshd@19-10.0.0.147:22-10.0.0.1:33254.service: Deactivated successfully.
Dec 16 13:13:58.784357 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 13:13:58.785174 systemd-logind[1561]: Session 20 logged out. Waiting for processes to exit.
Dec 16 13:13:58.786367 systemd-logind[1561]: Removed session 20.
Dec 16 13:14:03.801524 systemd[1]: Started sshd@20-10.0.0.147:22-10.0.0.1:56170.service - OpenSSH per-connection server daemon (10.0.0.1:56170).
Dec 16 13:14:03.857283 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 56170 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:14:03.858440 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:03.862454 systemd-logind[1561]: New session 21 of user core.
Dec 16 13:14:03.871945 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 13:14:03.976962 sshd[4307]: Connection closed by 10.0.0.1 port 56170
Dec 16 13:14:03.977287 sshd-session[4304]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:03.981903 systemd[1]: sshd@20-10.0.0.147:22-10.0.0.1:56170.service: Deactivated successfully.
Dec 16 13:14:03.983881 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 13:14:03.984731 systemd-logind[1561]: Session 21 logged out. Waiting for processes to exit.
Dec 16 13:14:03.986208 systemd-logind[1561]: Removed session 21.
Dec 16 13:14:08.993527 systemd[1]: Started sshd@21-10.0.0.147:22-10.0.0.1:56182.service - OpenSSH per-connection server daemon (10.0.0.1:56182).
Dec 16 13:14:09.044385 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 56182 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:14:09.045562 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:09.049692 systemd-logind[1561]: New session 22 of user core.
Dec 16 13:14:09.059956 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 13:14:09.169472 sshd[4324]: Connection closed by 10.0.0.1 port 56182
Dec 16 13:14:09.169852 sshd-session[4321]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:09.174007 systemd[1]: sshd@21-10.0.0.147:22-10.0.0.1:56182.service: Deactivated successfully.
Dec 16 13:14:09.176084 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 13:14:09.176934 systemd-logind[1561]: Session 22 logged out. Waiting for processes to exit.
Dec 16 13:14:09.178373 systemd-logind[1561]: Removed session 22.
Dec 16 13:14:14.186351 systemd[1]: Started sshd@22-10.0.0.147:22-10.0.0.1:43320.service - OpenSSH per-connection server daemon (10.0.0.1:43320).
Dec 16 13:14:14.242029 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 43320 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:14:14.243368 sshd-session[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:14.247366 systemd-logind[1561]: New session 23 of user core.
Dec 16 13:14:14.260963 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 16 13:14:14.365503 sshd[4341]: Connection closed by 10.0.0.1 port 43320
Dec 16 13:14:14.365861 sshd-session[4338]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:14.374508 systemd[1]: sshd@22-10.0.0.147:22-10.0.0.1:43320.service: Deactivated successfully.
Dec 16 13:14:14.376385 systemd[1]: session-23.scope: Deactivated successfully.
Dec 16 13:14:14.377187 systemd-logind[1561]: Session 23 logged out. Waiting for processes to exit.
Dec 16 13:14:14.379560 systemd[1]: Started sshd@23-10.0.0.147:22-10.0.0.1:43322.service - OpenSSH per-connection server daemon (10.0.0.1:43322).
Dec 16 13:14:14.380508 systemd-logind[1561]: Removed session 23.
Dec 16 13:14:14.438227 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 43322 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs
Dec 16 13:14:14.439548 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:14.443592 systemd-logind[1561]: New session 24 of user core.
Dec 16 13:14:14.452982 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 16 13:14:15.776103 containerd[1576]: time="2025-12-16T13:14:15.776058418Z" level=info msg="StopContainer for \"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\" with timeout 30 (s)"
Dec 16 13:14:15.788649 containerd[1576]: time="2025-12-16T13:14:15.788513972Z" level=info msg="Stop container \"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\" with signal terminated"
Dec 16 13:14:15.805318 systemd[1]: cri-containerd-c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808.scope: Deactivated successfully.
Dec 16 13:14:15.806574 containerd[1576]: time="2025-12-16T13:14:15.806535204Z" level=info msg="received container exit event container_id:\"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\" id:\"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\" pid:3326 exited_at:{seconds:1765890855 nanos:806256639}"
Dec 16 13:14:15.815722 containerd[1576]: time="2025-12-16T13:14:15.815684733Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 13:14:15.819698 containerd[1576]: time="2025-12-16T13:14:15.819577175Z" level=info msg="StopContainer for \"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\" with timeout 2 (s)"
Dec 16 13:14:15.819853 containerd[1576]: time="2025-12-16T13:14:15.819803210Z" level=info msg="Stop container \"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\" with signal terminated"
Dec 16 13:14:15.827301 systemd-networkd[1483]: lxc_health: Link DOWN
Dec 16 13:14:15.827307 systemd-networkd[1483]: lxc_health: Lost carrier
Dec 16 13:14:15.837253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808-rootfs.mount: Deactivated successfully.
Dec 16 13:14:15.852185 systemd[1]: cri-containerd-5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba.scope: Deactivated successfully.
Dec 16 13:14:15.852527 systemd[1]: cri-containerd-5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba.scope: Consumed 6.216s CPU time, 122.6M memory peak, 200K read from disk, 14.8M written to disk.
Dec 16 13:14:15.854368 containerd[1576]: time="2025-12-16T13:14:15.854313430Z" level=info msg="received container exit event container_id:\"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\" id:\"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\" pid:3396 exited_at:{seconds:1765890855 nanos:853984768}"
Dec 16 13:14:15.859964 containerd[1576]: time="2025-12-16T13:14:15.859202819Z" level=info msg="StopContainer for \"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\" returns successfully"
Dec 16 13:14:15.861962 containerd[1576]: time="2025-12-16T13:14:15.861938598Z" level=info msg="StopPodSandbox for \"a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb\""
Dec 16 13:14:15.865042 containerd[1576]: time="2025-12-16T13:14:15.864992098Z" level=info msg="Container to stop \"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:14:15.871479 systemd[1]: cri-containerd-a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb.scope: Deactivated successfully.
Dec 16 13:14:15.875923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba-rootfs.mount: Deactivated successfully.
Dec 16 13:14:15.877316 containerd[1576]: time="2025-12-16T13:14:15.877267304Z" level=info msg="received sandbox exit event container_id:\"a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb\" id:\"a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb\" exit_status:137 exited_at:{seconds:1765890855 nanos:877082820}" monitor_name=podsandbox
Dec 16 13:14:15.892122 containerd[1576]: time="2025-12-16T13:14:15.892087474Z" level=info msg="StopContainer for \"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\" returns successfully"
Dec 16 13:14:15.892715 containerd[1576]: time="2025-12-16T13:14:15.892681746Z" level=info msg="StopPodSandbox for \"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\""
Dec 16 13:14:15.892856 containerd[1576]: time="2025-12-16T13:14:15.892733396Z" level=info msg="Container to stop \"f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:14:15.892856 containerd[1576]: time="2025-12-16T13:14:15.892744166Z" level=info msg="Container to stop \"442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:14:15.892856 containerd[1576]: time="2025-12-16T13:14:15.892751711Z" level=info msg="Container to stop \"2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:14:15.892856 containerd[1576]: time="2025-12-16T13:14:15.892759596Z" level=info msg="Container to stop \"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:14:15.892856 containerd[1576]: time="2025-12-16T13:14:15.892767401Z" level=info msg="Container to stop \"ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:14:15.900227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb-rootfs.mount: Deactivated successfully.
Dec 16 13:14:15.901924 containerd[1576]: time="2025-12-16T13:14:15.901891180Z" level=info msg="received sandbox exit event container_id:\"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\" id:\"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\" exit_status:137 exited_at:{seconds:1765890855 nanos:901695383}" monitor_name=podsandbox
Dec 16 13:14:15.901916 systemd[1]: cri-containerd-7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090.scope: Deactivated successfully.
Dec 16 13:14:15.905856 containerd[1576]: time="2025-12-16T13:14:15.905803511Z" level=info msg="shim disconnected" id=a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb namespace=k8s.io
Dec 16 13:14:15.905999 containerd[1576]: time="2025-12-16T13:14:15.905937448Z" level=warning msg="cleaning up after shim disconnected" id=a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb namespace=k8s.io
Dec 16 13:14:15.923775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090-rootfs.mount: Deactivated successfully.
Dec 16 13:14:15.929910 containerd[1576]: time="2025-12-16T13:14:15.905950604Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 13:14:15.930045 containerd[1576]: time="2025-12-16T13:14:15.928682502Z" level=info msg="shim disconnected" id=7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090 namespace=k8s.io
Dec 16 13:14:15.930045 containerd[1576]: time="2025-12-16T13:14:15.929977060Z" level=warning msg="cleaning up after shim disconnected" id=7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090 namespace=k8s.io
Dec 16 13:14:15.930045 containerd[1576]: time="2025-12-16T13:14:15.929984885Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 13:14:15.950530 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090-shm.mount: Deactivated successfully.
Dec 16 13:14:15.972015 containerd[1576]: time="2025-12-16T13:14:15.971966257Z" level=info msg="TearDown network for sandbox \"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\" successfully"
Dec 16 13:14:15.972015 containerd[1576]: time="2025-12-16T13:14:15.972004651Z" level=info msg="StopPodSandbox for \"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\" returns successfully"
Dec 16 13:14:15.972463 containerd[1576]: time="2025-12-16T13:14:15.972413908Z" level=info msg="received sandbox container exit event sandbox_id:\"7ac08c2b875ad27a128671652a54df6b23cc98be8a5b9cb1c538749b331e1090\" exit_status:137 exited_at:{seconds:1765890855 nanos:901695383}" monitor_name=criService
Dec 16 13:14:15.978446 containerd[1576]: time="2025-12-16T13:14:15.978410315Z" level=info msg="received sandbox container exit event sandbox_id:\"a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb\" exit_status:137 exited_at:{seconds:1765890855 nanos:877082820}" monitor_name=criService
Dec 16 13:14:15.978618 containerd[1576]: time="2025-12-16T13:14:15.978585732Z" level=info msg="TearDown network for sandbox \"a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb\" successfully"
Dec 16 13:14:15.978618 containerd[1576]: time="2025-12-16T13:14:15.978606612Z" level=info msg="StopPodSandbox for \"a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb\" returns successfully"
Dec 16 13:14:16.030716 kubelet[2735]: I1216 13:14:16.030596 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/971bc456-1c69-4fbf-b9fd-7bdaa3821617-clustermesh-secrets\") pod \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") "
Dec 16 13:14:16.030716 kubelet[2735]: I1216 13:14:16.030633 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-hostproc\") pod \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") "
Dec 16 13:14:16.030716 kubelet[2735]: I1216 13:14:16.030650 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-host-proc-sys-kernel\") pod \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") "
Dec 16 13:14:16.030716 kubelet[2735]: I1216 13:14:16.030672 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-host-proc-sys-net\") pod \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") "
Dec 16 13:14:16.030716 kubelet[2735]: I1216 13:14:16.030685 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-bpf-maps\") pod \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") "
Dec 16 13:14:16.030716 kubelet[2735]: I1216 13:14:16.030697 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cilium-run\") pod \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") "
Dec 16 13:14:16.031220 kubelet[2735]: I1216 13:14:16.030718 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdzhk\" (UniqueName: \"kubernetes.io/projected/36a48443-a964-42a7-b664-99c23de3dd2d-kube-api-access-wdzhk\") pod \"36a48443-a964-42a7-b664-99c23de3dd2d\" (UID: \"36a48443-a964-42a7-b664-99c23de3dd2d\") "
Dec 16 13:14:16.031220 kubelet[2735]: I1216 13:14:16.030734 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/971bc456-1c69-4fbf-b9fd-7bdaa3821617-hubble-tls\") pod \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") "
Dec 16 13:14:16.031220 kubelet[2735]: I1216 13:14:16.030744 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "971bc456-1c69-4fbf-b9fd-7bdaa3821617" (UID: "971bc456-1c69-4fbf-b9fd-7bdaa3821617"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:16.031220 kubelet[2735]: I1216 13:14:16.030748 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cni-path\") pod \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") "
Dec 16 13:14:16.031220 kubelet[2735]: I1216 13:14:16.030787 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cilium-config-path\") pod \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") "
Dec 16 13:14:16.031338 kubelet[2735]: I1216 13:14:16.030797 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-hostproc" (OuterVolumeSpecName: "hostproc") pod "971bc456-1c69-4fbf-b9fd-7bdaa3821617" (UID: "971bc456-1c69-4fbf-b9fd-7bdaa3821617"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:16.031338 kubelet[2735]: I1216 13:14:16.030807 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxgms\" (UniqueName: \"kubernetes.io/projected/971bc456-1c69-4fbf-b9fd-7bdaa3821617-kube-api-access-zxgms\") pod \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") "
Dec 16 13:14:16.031338 kubelet[2735]: I1216 13:14:16.030846 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-lib-modules\") pod \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") "
Dec 16 13:14:16.031338 kubelet[2735]: I1216 13:14:16.030853 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cni-path" (OuterVolumeSpecName: "cni-path") pod "971bc456-1c69-4fbf-b9fd-7bdaa3821617" (UID: "971bc456-1c69-4fbf-b9fd-7bdaa3821617"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:16.031338 kubelet[2735]: I1216 13:14:16.030859 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cilium-cgroup\") pod \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") "
Dec 16 13:14:16.031338 kubelet[2735]: I1216 13:14:16.030877 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36a48443-a964-42a7-b664-99c23de3dd2d-cilium-config-path\") pod \"36a48443-a964-42a7-b664-99c23de3dd2d\" (UID: \"36a48443-a964-42a7-b664-99c23de3dd2d\") "
Dec 16 13:14:16.031490 kubelet[2735]: I1216 13:14:16.030892 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-etc-cni-netd\") pod \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") "
Dec 16 13:14:16.031490 kubelet[2735]: I1216 13:14:16.030908 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-xtables-lock\") pod \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\" (UID: \"971bc456-1c69-4fbf-b9fd-7bdaa3821617\") "
Dec 16 13:14:16.031490 kubelet[2735]: I1216 13:14:16.030935 2735 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cni-path\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.031490 kubelet[2735]: I1216 13:14:16.030944 2735 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-hostproc\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.031490 kubelet[2735]: I1216 13:14:16.030953 2735 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.031490 kubelet[2735]: I1216 13:14:16.030988 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "971bc456-1c69-4fbf-b9fd-7bdaa3821617" (UID: "971bc456-1c69-4fbf-b9fd-7bdaa3821617"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:16.034605 kubelet[2735]: I1216 13:14:16.034556 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/971bc456-1c69-4fbf-b9fd-7bdaa3821617-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "971bc456-1c69-4fbf-b9fd-7bdaa3821617" (UID: "971bc456-1c69-4fbf-b9fd-7bdaa3821617"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:14:16.034681 kubelet[2735]: I1216 13:14:16.034633 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "971bc456-1c69-4fbf-b9fd-7bdaa3821617" (UID: "971bc456-1c69-4fbf-b9fd-7bdaa3821617"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:16.034712 kubelet[2735]: I1216 13:14:16.034691 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/971bc456-1c69-4fbf-b9fd-7bdaa3821617-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "971bc456-1c69-4fbf-b9fd-7bdaa3821617" (UID: "971bc456-1c69-4fbf-b9fd-7bdaa3821617"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 16 13:14:16.034980 kubelet[2735]: I1216 13:14:16.034936 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "971bc456-1c69-4fbf-b9fd-7bdaa3821617" (UID: "971bc456-1c69-4fbf-b9fd-7bdaa3821617"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 16 13:14:16.034980 kubelet[2735]: I1216 13:14:16.034982 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "971bc456-1c69-4fbf-b9fd-7bdaa3821617" (UID: "971bc456-1c69-4fbf-b9fd-7bdaa3821617"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:16.035121 kubelet[2735]: I1216 13:14:16.035002 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "971bc456-1c69-4fbf-b9fd-7bdaa3821617" (UID: "971bc456-1c69-4fbf-b9fd-7bdaa3821617"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:16.035121 kubelet[2735]: I1216 13:14:16.035019 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "971bc456-1c69-4fbf-b9fd-7bdaa3821617" (UID: "971bc456-1c69-4fbf-b9fd-7bdaa3821617"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:16.035121 kubelet[2735]: I1216 13:14:16.035033 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "971bc456-1c69-4fbf-b9fd-7bdaa3821617" (UID: "971bc456-1c69-4fbf-b9fd-7bdaa3821617"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:16.035121 kubelet[2735]: I1216 13:14:16.035048 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "971bc456-1c69-4fbf-b9fd-7bdaa3821617" (UID: "971bc456-1c69-4fbf-b9fd-7bdaa3821617"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:16.035704 kubelet[2735]: I1216 13:14:16.035649 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36a48443-a964-42a7-b664-99c23de3dd2d-kube-api-access-wdzhk" (OuterVolumeSpecName: "kube-api-access-wdzhk") pod "36a48443-a964-42a7-b664-99c23de3dd2d" (UID: "36a48443-a964-42a7-b664-99c23de3dd2d"). InnerVolumeSpecName "kube-api-access-wdzhk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:14:16.037211 kubelet[2735]: I1216 13:14:16.037162 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/971bc456-1c69-4fbf-b9fd-7bdaa3821617-kube-api-access-zxgms" (OuterVolumeSpecName: "kube-api-access-zxgms") pod "971bc456-1c69-4fbf-b9fd-7bdaa3821617" (UID: "971bc456-1c69-4fbf-b9fd-7bdaa3821617"). InnerVolumeSpecName "kube-api-access-zxgms". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:14:16.038280 kubelet[2735]: I1216 13:14:16.038248 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36a48443-a964-42a7-b664-99c23de3dd2d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "36a48443-a964-42a7-b664-99c23de3dd2d" (UID: "36a48443-a964-42a7-b664-99c23de3dd2d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 16 13:14:16.132027 kubelet[2735]: I1216 13:14:16.131978 2735 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-bpf-maps\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.132027 kubelet[2735]: I1216 13:14:16.132004 2735 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cilium-run\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.132027 kubelet[2735]: I1216 13:14:16.132016 2735 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wdzhk\" (UniqueName: \"kubernetes.io/projected/36a48443-a964-42a7-b664-99c23de3dd2d-kube-api-access-wdzhk\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.132027 kubelet[2735]: I1216 13:14:16.132029 2735 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/971bc456-1c69-4fbf-b9fd-7bdaa3821617-hubble-tls\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.132027 kubelet[2735]: I1216 13:14:16.132037 2735 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.132233 kubelet[2735]: I1216 13:14:16.132046 2735 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zxgms\" (UniqueName: \"kubernetes.io/projected/971bc456-1c69-4fbf-b9fd-7bdaa3821617-kube-api-access-zxgms\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.132233 kubelet[2735]: I1216 13:14:16.132055 2735 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.132233 kubelet[2735]: I1216 13:14:16.132063 2735 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.132233 kubelet[2735]: I1216 13:14:16.132072 2735 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36a48443-a964-42a7-b664-99c23de3dd2d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.132233 kubelet[2735]: I1216 13:14:16.132081 2735 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.132233 kubelet[2735]: I1216 13:14:16.132088 2735 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-xtables-lock\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.132233 kubelet[2735]: I1216 13:14:16.132095 2735 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/971bc456-1c69-4fbf-b9fd-7bdaa3821617-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.132233 kubelet[2735]: I1216 13:14:16.132103 2735 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/971bc456-1c69-4fbf-b9fd-7bdaa3821617-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Dec 16 13:14:16.629019 kubelet[2735]: I1216 13:14:16.628758 2735 scope.go:117] "RemoveContainer" containerID="c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808"
Dec 16 13:14:16.631676 containerd[1576]: time="2025-12-16T13:14:16.631641421Z" level=info msg="RemoveContainer for \"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\""
Dec 16 13:14:16.638120 systemd[1]: Removed slice kubepods-besteffort-pod36a48443_a964_42a7_b664_99c23de3dd2d.slice - libcontainer container kubepods-besteffort-pod36a48443_a964_42a7_b664_99c23de3dd2d.slice.
Dec 16 13:14:16.640521 systemd[1]: Removed slice kubepods-burstable-pod971bc456_1c69_4fbf_b9fd_7bdaa3821617.slice - libcontainer container kubepods-burstable-pod971bc456_1c69_4fbf_b9fd_7bdaa3821617.slice.
Dec 16 13:14:16.640719 systemd[1]: kubepods-burstable-pod971bc456_1c69_4fbf_b9fd_7bdaa3821617.slice: Consumed 6.330s CPU time, 123M memory peak, 216K read from disk, 14.8M written to disk.
Dec 16 13:14:16.652070 containerd[1576]: time="2025-12-16T13:14:16.652016193Z" level=info msg="RemoveContainer for \"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\" returns successfully" Dec 16 13:14:16.652341 kubelet[2735]: I1216 13:14:16.652299 2735 scope.go:117] "RemoveContainer" containerID="c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808" Dec 16 13:14:16.658272 containerd[1576]: time="2025-12-16T13:14:16.652568554Z" level=error msg="ContainerStatus for \"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\": not found" Dec 16 13:14:16.658459 kubelet[2735]: E1216 13:14:16.658412 2735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\": not found" containerID="c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808" Dec 16 13:14:16.658459 kubelet[2735]: I1216 13:14:16.658450 2735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808"} err="failed to get container status \"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\": rpc error: code = NotFound desc = an error occurred when try to find container \"c3fe53bb80d713e67186014f5cdcd3ee3307d2e94863b4f30634c585b9536808\": not found" Dec 16 13:14:16.658625 kubelet[2735]: I1216 13:14:16.658482 2735 scope.go:117] "RemoveContainer" containerID="5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba" Dec 16 13:14:16.660440 containerd[1576]: time="2025-12-16T13:14:16.660407123Z" level=info msg="RemoveContainer for \"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\"" Dec 16 13:14:16.667489 
containerd[1576]: time="2025-12-16T13:14:16.667447879Z" level=info msg="RemoveContainer for \"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\" returns successfully" Dec 16 13:14:16.667644 kubelet[2735]: I1216 13:14:16.667617 2735 scope.go:117] "RemoveContainer" containerID="f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30" Dec 16 13:14:16.668745 containerd[1576]: time="2025-12-16T13:14:16.668707558Z" level=info msg="RemoveContainer for \"f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30\"" Dec 16 13:14:16.672962 containerd[1576]: time="2025-12-16T13:14:16.672931553Z" level=info msg="RemoveContainer for \"f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30\" returns successfully" Dec 16 13:14:16.673146 kubelet[2735]: I1216 13:14:16.673112 2735 scope.go:117] "RemoveContainer" containerID="2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f" Dec 16 13:14:16.675906 containerd[1576]: time="2025-12-16T13:14:16.675177618Z" level=info msg="RemoveContainer for \"2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f\"" Dec 16 13:14:16.679649 containerd[1576]: time="2025-12-16T13:14:16.679611236Z" level=info msg="RemoveContainer for \"2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f\" returns successfully" Dec 16 13:14:16.679802 kubelet[2735]: I1216 13:14:16.679782 2735 scope.go:117] "RemoveContainer" containerID="ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2" Dec 16 13:14:16.680983 containerd[1576]: time="2025-12-16T13:14:16.680951781Z" level=info msg="RemoveContainer for \"ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2\"" Dec 16 13:14:16.684832 containerd[1576]: time="2025-12-16T13:14:16.684790215Z" level=info msg="RemoveContainer for \"ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2\" returns successfully" Dec 16 13:14:16.684948 kubelet[2735]: I1216 13:14:16.684919 2735 scope.go:117] "RemoveContainer" 
containerID="442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a" Dec 16 13:14:16.686081 containerd[1576]: time="2025-12-16T13:14:16.686057279Z" level=info msg="RemoveContainer for \"442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a\"" Dec 16 13:14:16.690835 containerd[1576]: time="2025-12-16T13:14:16.689791373Z" level=info msg="RemoveContainer for \"442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a\" returns successfully" Dec 16 13:14:16.690931 kubelet[2735]: I1216 13:14:16.690911 2735 scope.go:117] "RemoveContainer" containerID="5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba" Dec 16 13:14:16.692985 containerd[1576]: time="2025-12-16T13:14:16.692945131Z" level=error msg="ContainerStatus for \"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\": not found" Dec 16 13:14:16.693126 kubelet[2735]: E1216 13:14:16.693100 2735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\": not found" containerID="5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba" Dec 16 13:14:16.693162 kubelet[2735]: I1216 13:14:16.693128 2735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba"} err="failed to get container status \"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"5bc4163389eb891ae11aa410bae319f3372ac8975d00c3faf408648d5a9ec0ba\": not found" Dec 16 13:14:16.693162 kubelet[2735]: I1216 13:14:16.693147 2735 scope.go:117] "RemoveContainer" 
containerID="f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30" Dec 16 13:14:16.693405 containerd[1576]: time="2025-12-16T13:14:16.693348186Z" level=error msg="ContainerStatus for \"f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30\": not found" Dec 16 13:14:16.693537 kubelet[2735]: E1216 13:14:16.693482 2735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30\": not found" containerID="f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30" Dec 16 13:14:16.693537 kubelet[2735]: I1216 13:14:16.693513 2735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30"} err="failed to get container status \"f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30\": rpc error: code = NotFound desc = an error occurred when try to find container \"f88c90b69c96895c25e63b9adefaf93088c12e01479c90397b6fe268add68c30\": not found" Dec 16 13:14:16.693537 kubelet[2735]: I1216 13:14:16.693536 2735 scope.go:117] "RemoveContainer" containerID="2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f" Dec 16 13:14:16.693890 containerd[1576]: time="2025-12-16T13:14:16.693853467Z" level=error msg="ContainerStatus for \"2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f\": not found" Dec 16 13:14:16.694010 kubelet[2735]: E1216 13:14:16.693983 2735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f\": not found" containerID="2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f" Dec 16 13:14:16.694050 kubelet[2735]: I1216 13:14:16.694014 2735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f"} err="failed to get container status \"2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e8a590dc39ac3ef1a96df3ce312c223a2f1131aec66130f26fdb2f52813c36f\": not found" Dec 16 13:14:16.694050 kubelet[2735]: I1216 13:14:16.694037 2735 scope.go:117] "RemoveContainer" containerID="ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2" Dec 16 13:14:16.694238 containerd[1576]: time="2025-12-16T13:14:16.694207276Z" level=error msg="ContainerStatus for \"ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2\": not found" Dec 16 13:14:16.694356 kubelet[2735]: E1216 13:14:16.694333 2735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2\": not found" containerID="ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2" Dec 16 13:14:16.694390 kubelet[2735]: I1216 13:14:16.694355 2735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2"} err="failed to get container status \"ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"ca2f03eb0688ef33cf0b698a183ad3c44ae8d4591db9dfca5490a0295a22e9e2\": not found" Dec 16 13:14:16.694390 kubelet[2735]: I1216 13:14:16.694367 2735 scope.go:117] "RemoveContainer" containerID="442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a" Dec 16 13:14:16.694541 containerd[1576]: time="2025-12-16T13:14:16.694511710Z" level=error msg="ContainerStatus for \"442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a\": not found" Dec 16 13:14:16.694613 kubelet[2735]: E1216 13:14:16.694590 2735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a\": not found" containerID="442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a" Dec 16 13:14:16.694613 kubelet[2735]: I1216 13:14:16.694603 2735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a"} err="failed to get container status \"442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a\": rpc error: code = NotFound desc = an error occurred when try to find container \"442552cd3b8a57dda6d2f34bcfb3df531941177ce4212f0fe171847fe26a144a\": not found" Dec 16 13:14:16.836789 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a969d708505568a2aef421d386337281cb0ae1d707389fff607546f4291d3dbb-shm.mount: Deactivated successfully. Dec 16 13:14:16.836937 systemd[1]: var-lib-kubelet-pods-36a48443\x2da964\x2d42a7\x2db664\x2d99c23de3dd2d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwdzhk.mount: Deactivated successfully. 
Dec 16 13:14:16.837014 systemd[1]: var-lib-kubelet-pods-971bc456\x2d1c69\x2d4fbf\x2db9fd\x2d7bdaa3821617-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzxgms.mount: Deactivated successfully. Dec 16 13:14:16.837086 systemd[1]: var-lib-kubelet-pods-971bc456\x2d1c69\x2d4fbf\x2db9fd\x2d7bdaa3821617-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 16 13:14:16.837159 systemd[1]: var-lib-kubelet-pods-971bc456\x2d1c69\x2d4fbf\x2db9fd\x2d7bdaa3821617-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 16 13:14:17.384841 kubelet[2735]: I1216 13:14:17.384774 2735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36a48443-a964-42a7-b664-99c23de3dd2d" path="/var/lib/kubelet/pods/36a48443-a964-42a7-b664-99c23de3dd2d/volumes" Dec 16 13:14:17.385351 kubelet[2735]: I1216 13:14:17.385323 2735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="971bc456-1c69-4fbf-b9fd-7bdaa3821617" path="/var/lib/kubelet/pods/971bc456-1c69-4fbf-b9fd-7bdaa3821617/volumes" Dec 16 13:14:17.742670 sshd[4358]: Connection closed by 10.0.0.1 port 43322 Dec 16 13:14:17.743064 sshd-session[4355]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:17.752594 systemd[1]: sshd@23-10.0.0.147:22-10.0.0.1:43322.service: Deactivated successfully. Dec 16 13:14:17.754549 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 13:14:17.755368 systemd-logind[1561]: Session 24 logged out. Waiting for processes to exit. Dec 16 13:14:17.758771 systemd[1]: Started sshd@24-10.0.0.147:22-10.0.0.1:43326.service - OpenSSH per-connection server daemon (10.0.0.1:43326). Dec 16 13:14:17.759929 systemd-logind[1561]: Removed session 24. 
Dec 16 13:14:17.815036 sshd[4503]: Accepted publickey for core from 10.0.0.1 port 43326 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:14:17.816312 sshd-session[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:17.820743 systemd-logind[1561]: New session 25 of user core. Dec 16 13:14:17.836964 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 13:14:18.443310 sshd[4506]: Connection closed by 10.0.0.1 port 43326 Dec 16 13:14:18.444072 sshd-session[4503]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:18.458006 systemd[1]: sshd@24-10.0.0.147:22-10.0.0.1:43326.service: Deactivated successfully. Dec 16 13:14:18.463249 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 13:14:18.464568 systemd-logind[1561]: Session 25 logged out. Waiting for processes to exit. Dec 16 13:14:18.472041 systemd[1]: Started sshd@25-10.0.0.147:22-10.0.0.1:43334.service - OpenSSH per-connection server daemon (10.0.0.1:43334). Dec 16 13:14:18.473901 systemd-logind[1561]: Removed session 25. Dec 16 13:14:18.484440 systemd[1]: Created slice kubepods-burstable-pod907f91be_1880_419d_96c7_4a5fa2b3af95.slice - libcontainer container kubepods-burstable-pod907f91be_1880_419d_96c7_4a5fa2b3af95.slice. Dec 16 13:14:18.525438 sshd[4518]: Accepted publickey for core from 10.0.0.1 port 43334 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:14:18.527114 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:18.531246 systemd-logind[1561]: New session 26 of user core. Dec 16 13:14:18.543962 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 13:14:18.546801 kubelet[2735]: I1216 13:14:18.546773 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/907f91be-1880-419d-96c7-4a5fa2b3af95-host-proc-sys-kernel\") pod \"cilium-w5z5g\" (UID: \"907f91be-1880-419d-96c7-4a5fa2b3af95\") " pod="kube-system/cilium-w5z5g" Dec 16 13:14:18.547058 kubelet[2735]: I1216 13:14:18.546804 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/907f91be-1880-419d-96c7-4a5fa2b3af95-hubble-tls\") pod \"cilium-w5z5g\" (UID: \"907f91be-1880-419d-96c7-4a5fa2b3af95\") " pod="kube-system/cilium-w5z5g" Dec 16 13:14:18.547058 kubelet[2735]: I1216 13:14:18.546833 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/907f91be-1880-419d-96c7-4a5fa2b3af95-hostproc\") pod \"cilium-w5z5g\" (UID: \"907f91be-1880-419d-96c7-4a5fa2b3af95\") " pod="kube-system/cilium-w5z5g" Dec 16 13:14:18.547058 kubelet[2735]: I1216 13:14:18.546847 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/907f91be-1880-419d-96c7-4a5fa2b3af95-cilium-config-path\") pod \"cilium-w5z5g\" (UID: \"907f91be-1880-419d-96c7-4a5fa2b3af95\") " pod="kube-system/cilium-w5z5g" Dec 16 13:14:18.547058 kubelet[2735]: I1216 13:14:18.546873 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/907f91be-1880-419d-96c7-4a5fa2b3af95-cilium-cgroup\") pod \"cilium-w5z5g\" (UID: \"907f91be-1880-419d-96c7-4a5fa2b3af95\") " pod="kube-system/cilium-w5z5g" Dec 16 13:14:18.547058 kubelet[2735]: I1216 13:14:18.546887 2735 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/907f91be-1880-419d-96c7-4a5fa2b3af95-clustermesh-secrets\") pod \"cilium-w5z5g\" (UID: \"907f91be-1880-419d-96c7-4a5fa2b3af95\") " pod="kube-system/cilium-w5z5g" Dec 16 13:14:18.547058 kubelet[2735]: I1216 13:14:18.546901 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/907f91be-1880-419d-96c7-4a5fa2b3af95-host-proc-sys-net\") pod \"cilium-w5z5g\" (UID: \"907f91be-1880-419d-96c7-4a5fa2b3af95\") " pod="kube-system/cilium-w5z5g" Dec 16 13:14:18.547197 kubelet[2735]: I1216 13:14:18.546936 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/907f91be-1880-419d-96c7-4a5fa2b3af95-xtables-lock\") pod \"cilium-w5z5g\" (UID: \"907f91be-1880-419d-96c7-4a5fa2b3af95\") " pod="kube-system/cilium-w5z5g" Dec 16 13:14:18.547197 kubelet[2735]: I1216 13:14:18.546949 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xls5f\" (UniqueName: \"kubernetes.io/projected/907f91be-1880-419d-96c7-4a5fa2b3af95-kube-api-access-xls5f\") pod \"cilium-w5z5g\" (UID: \"907f91be-1880-419d-96c7-4a5fa2b3af95\") " pod="kube-system/cilium-w5z5g" Dec 16 13:14:18.547197 kubelet[2735]: I1216 13:14:18.546964 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/907f91be-1880-419d-96c7-4a5fa2b3af95-cni-path\") pod \"cilium-w5z5g\" (UID: \"907f91be-1880-419d-96c7-4a5fa2b3af95\") " pod="kube-system/cilium-w5z5g" Dec 16 13:14:18.547197 kubelet[2735]: I1216 13:14:18.546976 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/907f91be-1880-419d-96c7-4a5fa2b3af95-etc-cni-netd\") pod \"cilium-w5z5g\" (UID: \"907f91be-1880-419d-96c7-4a5fa2b3af95\") " pod="kube-system/cilium-w5z5g" Dec 16 13:14:18.547197 kubelet[2735]: I1216 13:14:18.546990 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/907f91be-1880-419d-96c7-4a5fa2b3af95-lib-modules\") pod \"cilium-w5z5g\" (UID: \"907f91be-1880-419d-96c7-4a5fa2b3af95\") " pod="kube-system/cilium-w5z5g" Dec 16 13:14:18.547197 kubelet[2735]: I1216 13:14:18.547018 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/907f91be-1880-419d-96c7-4a5fa2b3af95-cilium-ipsec-secrets\") pod \"cilium-w5z5g\" (UID: \"907f91be-1880-419d-96c7-4a5fa2b3af95\") " pod="kube-system/cilium-w5z5g" Dec 16 13:14:18.547338 kubelet[2735]: I1216 13:14:18.547042 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/907f91be-1880-419d-96c7-4a5fa2b3af95-cilium-run\") pod \"cilium-w5z5g\" (UID: \"907f91be-1880-419d-96c7-4a5fa2b3af95\") " pod="kube-system/cilium-w5z5g" Dec 16 13:14:18.547338 kubelet[2735]: I1216 13:14:18.547063 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/907f91be-1880-419d-96c7-4a5fa2b3af95-bpf-maps\") pod \"cilium-w5z5g\" (UID: \"907f91be-1880-419d-96c7-4a5fa2b3af95\") " pod="kube-system/cilium-w5z5g" Dec 16 13:14:18.594976 sshd[4521]: Connection closed by 10.0.0.1 port 43334 Dec 16 13:14:18.595379 sshd-session[4518]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:18.609583 systemd[1]: sshd@25-10.0.0.147:22-10.0.0.1:43334.service: Deactivated successfully. 
Dec 16 13:14:18.611494 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 13:14:18.612355 systemd-logind[1561]: Session 26 logged out. Waiting for processes to exit. Dec 16 13:14:18.615098 systemd[1]: Started sshd@26-10.0.0.147:22-10.0.0.1:43340.service - OpenSSH per-connection server daemon (10.0.0.1:43340). Dec 16 13:14:18.615755 systemd-logind[1561]: Removed session 26. Dec 16 13:14:18.674538 sshd[4528]: Accepted publickey for core from 10.0.0.1 port 43340 ssh2: RSA SHA256:Wn63AtyvivOtj7nJWnKublRzH8Q6eLENL+IqD3nMnzs Dec 16 13:14:18.676166 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:18.680116 systemd-logind[1561]: New session 27 of user core. Dec 16 13:14:18.689970 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 16 13:14:18.799712 containerd[1576]: time="2025-12-16T13:14:18.799552647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5z5g,Uid:907f91be-1880-419d-96c7-4a5fa2b3af95,Namespace:kube-system,Attempt:0,}" Dec 16 13:14:18.814576 containerd[1576]: time="2025-12-16T13:14:18.814533771Z" level=info msg="connecting to shim 29494d6818961f28b276dffb99c0ed2d0d46a137855f3a71b0f65e2620ecf35e" address="unix:///run/containerd/s/7532a09fecf843c62bf076eddeb116d31283b4dd95706eb6cd8d5f44278f1a81" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:18.840940 systemd[1]: Started cri-containerd-29494d6818961f28b276dffb99c0ed2d0d46a137855f3a71b0f65e2620ecf35e.scope - libcontainer container 29494d6818961f28b276dffb99c0ed2d0d46a137855f3a71b0f65e2620ecf35e. 
Dec 16 13:14:18.865796 containerd[1576]: time="2025-12-16T13:14:18.865766343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5z5g,Uid:907f91be-1880-419d-96c7-4a5fa2b3af95,Namespace:kube-system,Attempt:0,} returns sandbox id \"29494d6818961f28b276dffb99c0ed2d0d46a137855f3a71b0f65e2620ecf35e\"" Dec 16 13:14:18.871513 containerd[1576]: time="2025-12-16T13:14:18.871467122Z" level=info msg="CreateContainer within sandbox \"29494d6818961f28b276dffb99c0ed2d0d46a137855f3a71b0f65e2620ecf35e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 13:14:18.878361 containerd[1576]: time="2025-12-16T13:14:18.878322837Z" level=info msg="Container 25ba9d2cd136ab2d619bf9e7309fa04706317297142296d47fc380ff1b4673c1: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:18.884544 containerd[1576]: time="2025-12-16T13:14:18.884502764Z" level=info msg="CreateContainer within sandbox \"29494d6818961f28b276dffb99c0ed2d0d46a137855f3a71b0f65e2620ecf35e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"25ba9d2cd136ab2d619bf9e7309fa04706317297142296d47fc380ff1b4673c1\"" Dec 16 13:14:18.885512 containerd[1576]: time="2025-12-16T13:14:18.884879346Z" level=info msg="StartContainer for \"25ba9d2cd136ab2d619bf9e7309fa04706317297142296d47fc380ff1b4673c1\"" Dec 16 13:14:18.885638 containerd[1576]: time="2025-12-16T13:14:18.885616491Z" level=info msg="connecting to shim 25ba9d2cd136ab2d619bf9e7309fa04706317297142296d47fc380ff1b4673c1" address="unix:///run/containerd/s/7532a09fecf843c62bf076eddeb116d31283b4dd95706eb6cd8d5f44278f1a81" protocol=ttrpc version=3 Dec 16 13:14:18.909974 systemd[1]: Started cri-containerd-25ba9d2cd136ab2d619bf9e7309fa04706317297142296d47fc380ff1b4673c1.scope - libcontainer container 25ba9d2cd136ab2d619bf9e7309fa04706317297142296d47fc380ff1b4673c1. 
Dec 16 13:14:18.939668 containerd[1576]: time="2025-12-16T13:14:18.939620210Z" level=info msg="StartContainer for \"25ba9d2cd136ab2d619bf9e7309fa04706317297142296d47fc380ff1b4673c1\" returns successfully" Dec 16 13:14:18.948435 systemd[1]: cri-containerd-25ba9d2cd136ab2d619bf9e7309fa04706317297142296d47fc380ff1b4673c1.scope: Deactivated successfully. Dec 16 13:14:18.949682 containerd[1576]: time="2025-12-16T13:14:18.949641660Z" level=info msg="received container exit event container_id:\"25ba9d2cd136ab2d619bf9e7309fa04706317297142296d47fc380ff1b4673c1\" id:\"25ba9d2cd136ab2d619bf9e7309fa04706317297142296d47fc380ff1b4673c1\" pid:4602 exited_at:{seconds:1765890858 nanos:949326595}" Dec 16 13:14:19.646836 containerd[1576]: time="2025-12-16T13:14:19.646771202Z" level=info msg="CreateContainer within sandbox \"29494d6818961f28b276dffb99c0ed2d0d46a137855f3a71b0f65e2620ecf35e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 13:14:19.663093 containerd[1576]: time="2025-12-16T13:14:19.663054648Z" level=info msg="Container 8b6a4ff7fff5e57ae873263148e7674103ef4ddc4dc083fa645e75e79d832104: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:19.666118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094695745.mount: Deactivated successfully. 
Dec 16 13:14:19.669430 containerd[1576]: time="2025-12-16T13:14:19.669391039Z" level=info msg="CreateContainer within sandbox \"29494d6818961f28b276dffb99c0ed2d0d46a137855f3a71b0f65e2620ecf35e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8b6a4ff7fff5e57ae873263148e7674103ef4ddc4dc083fa645e75e79d832104\"" Dec 16 13:14:19.669979 containerd[1576]: time="2025-12-16T13:14:19.669947957Z" level=info msg="StartContainer for \"8b6a4ff7fff5e57ae873263148e7674103ef4ddc4dc083fa645e75e79d832104\"" Dec 16 13:14:19.670839 containerd[1576]: time="2025-12-16T13:14:19.670802175Z" level=info msg="connecting to shim 8b6a4ff7fff5e57ae873263148e7674103ef4ddc4dc083fa645e75e79d832104" address="unix:///run/containerd/s/7532a09fecf843c62bf076eddeb116d31283b4dd95706eb6cd8d5f44278f1a81" protocol=ttrpc version=3 Dec 16 13:14:19.696021 systemd[1]: Started cri-containerd-8b6a4ff7fff5e57ae873263148e7674103ef4ddc4dc083fa645e75e79d832104.scope - libcontainer container 8b6a4ff7fff5e57ae873263148e7674103ef4ddc4dc083fa645e75e79d832104. Dec 16 13:14:19.727204 containerd[1576]: time="2025-12-16T13:14:19.727164327Z" level=info msg="StartContainer for \"8b6a4ff7fff5e57ae873263148e7674103ef4ddc4dc083fa645e75e79d832104\" returns successfully" Dec 16 13:14:19.732127 systemd[1]: cri-containerd-8b6a4ff7fff5e57ae873263148e7674103ef4ddc4dc083fa645e75e79d832104.scope: Deactivated successfully. Dec 16 13:14:19.732841 containerd[1576]: time="2025-12-16T13:14:19.732786698Z" level=info msg="received container exit event container_id:\"8b6a4ff7fff5e57ae873263148e7674103ef4ddc4dc083fa645e75e79d832104\" id:\"8b6a4ff7fff5e57ae873263148e7674103ef4ddc4dc083fa645e75e79d832104\" pid:4650 exited_at:{seconds:1765890859 nanos:732488696}" Dec 16 13:14:19.753168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b6a4ff7fff5e57ae873263148e7674103ef4ddc4dc083fa645e75e79d832104-rootfs.mount: Deactivated successfully. 
Dec 16 13:14:20.480275 kubelet[2735]: E1216 13:14:20.480193 2735 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 13:14:20.714359 containerd[1576]: time="2025-12-16T13:14:20.714309492Z" level=info msg="CreateContainer within sandbox \"29494d6818961f28b276dffb99c0ed2d0d46a137855f3a71b0f65e2620ecf35e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 13:14:20.726312 containerd[1576]: time="2025-12-16T13:14:20.726265745Z" level=info msg="Container 94959385ed2335a3cf8a3989d8447f3512114298e93b14ceda12ab4f9e3a07b7: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:20.730451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3631883325.mount: Deactivated successfully. Dec 16 13:14:20.738247 containerd[1576]: time="2025-12-16T13:14:20.738191101Z" level=info msg="CreateContainer within sandbox \"29494d6818961f28b276dffb99c0ed2d0d46a137855f3a71b0f65e2620ecf35e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"94959385ed2335a3cf8a3989d8447f3512114298e93b14ceda12ab4f9e3a07b7\"" Dec 16 13:14:20.738909 containerd[1576]: time="2025-12-16T13:14:20.738882345Z" level=info msg="StartContainer for \"94959385ed2335a3cf8a3989d8447f3512114298e93b14ceda12ab4f9e3a07b7\"" Dec 16 13:14:20.740507 containerd[1576]: time="2025-12-16T13:14:20.740414702Z" level=info msg="connecting to shim 94959385ed2335a3cf8a3989d8447f3512114298e93b14ceda12ab4f9e3a07b7" address="unix:///run/containerd/s/7532a09fecf843c62bf076eddeb116d31283b4dd95706eb6cd8d5f44278f1a81" protocol=ttrpc version=3 Dec 16 13:14:20.767959 systemd[1]: Started cri-containerd-94959385ed2335a3cf8a3989d8447f3512114298e93b14ceda12ab4f9e3a07b7.scope - libcontainer container 94959385ed2335a3cf8a3989d8447f3512114298e93b14ceda12ab4f9e3a07b7. 
Dec 16 13:14:20.855265 containerd[1576]: time="2025-12-16T13:14:20.855225332Z" level=info msg="StartContainer for \"94959385ed2335a3cf8a3989d8447f3512114298e93b14ceda12ab4f9e3a07b7\" returns successfully"
Dec 16 13:14:20.857167 systemd[1]: cri-containerd-94959385ed2335a3cf8a3989d8447f3512114298e93b14ceda12ab4f9e3a07b7.scope: Deactivated successfully.
Dec 16 13:14:20.858262 containerd[1576]: time="2025-12-16T13:14:20.858217036Z" level=info msg="received container exit event container_id:\"94959385ed2335a3cf8a3989d8447f3512114298e93b14ceda12ab4f9e3a07b7\" id:\"94959385ed2335a3cf8a3989d8447f3512114298e93b14ceda12ab4f9e3a07b7\" pid:4692 exited_at:{seconds:1765890860 nanos:858007073}"
Dec 16 13:14:20.880666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94959385ed2335a3cf8a3989d8447f3512114298e93b14ceda12ab4f9e3a07b7-rootfs.mount: Deactivated successfully.
Dec 16 13:14:21.658835 containerd[1576]: time="2025-12-16T13:14:21.658778315Z" level=info msg="CreateContainer within sandbox \"29494d6818961f28b276dffb99c0ed2d0d46a137855f3a71b0f65e2620ecf35e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 13:14:21.672085 containerd[1576]: time="2025-12-16T13:14:21.672029728Z" level=info msg="Container bd67caa3e36fd87b58471291b53b496c36b99fd8fde339b4ed36af09751e3710: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:21.679558 containerd[1576]: time="2025-12-16T13:14:21.679520776Z" level=info msg="CreateContainer within sandbox \"29494d6818961f28b276dffb99c0ed2d0d46a137855f3a71b0f65e2620ecf35e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bd67caa3e36fd87b58471291b53b496c36b99fd8fde339b4ed36af09751e3710\""
Dec 16 13:14:21.680158 containerd[1576]: time="2025-12-16T13:14:21.680122379Z" level=info msg="StartContainer for \"bd67caa3e36fd87b58471291b53b496c36b99fd8fde339b4ed36af09751e3710\""
Dec 16 13:14:21.680836 containerd[1576]: time="2025-12-16T13:14:21.680793975Z" level=info msg="connecting to shim bd67caa3e36fd87b58471291b53b496c36b99fd8fde339b4ed36af09751e3710" address="unix:///run/containerd/s/7532a09fecf843c62bf076eddeb116d31283b4dd95706eb6cd8d5f44278f1a81" protocol=ttrpc version=3
Dec 16 13:14:21.697615 systemd[1]: Started cri-containerd-bd67caa3e36fd87b58471291b53b496c36b99fd8fde339b4ed36af09751e3710.scope - libcontainer container bd67caa3e36fd87b58471291b53b496c36b99fd8fde339b4ed36af09751e3710.
Dec 16 13:14:21.730719 systemd[1]: cri-containerd-bd67caa3e36fd87b58471291b53b496c36b99fd8fde339b4ed36af09751e3710.scope: Deactivated successfully.
Dec 16 13:14:21.731547 containerd[1576]: time="2025-12-16T13:14:21.731514180Z" level=info msg="StartContainer for \"bd67caa3e36fd87b58471291b53b496c36b99fd8fde339b4ed36af09751e3710\" returns successfully"
Dec 16 13:14:21.733460 containerd[1576]: time="2025-12-16T13:14:21.733415061Z" level=info msg="received container exit event container_id:\"bd67caa3e36fd87b58471291b53b496c36b99fd8fde339b4ed36af09751e3710\" id:\"bd67caa3e36fd87b58471291b53b496c36b99fd8fde339b4ed36af09751e3710\" pid:4732 exited_at:{seconds:1765890861 nanos:733150655}"
Dec 16 13:14:21.753055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd67caa3e36fd87b58471291b53b496c36b99fd8fde339b4ed36af09751e3710-rootfs.mount: Deactivated successfully.
Dec 16 13:14:22.666951 containerd[1576]: time="2025-12-16T13:14:22.666905566Z" level=info msg="CreateContainer within sandbox \"29494d6818961f28b276dffb99c0ed2d0d46a137855f3a71b0f65e2620ecf35e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 13:14:22.682763 containerd[1576]: time="2025-12-16T13:14:22.682706357Z" level=info msg="Container fd7d0a5c5578e8b0900513882565370f7396ad3c4753676a9a905f0f26284f7b: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:22.686445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2206238373.mount: Deactivated successfully.
Dec 16 13:14:22.690085 containerd[1576]: time="2025-12-16T13:14:22.690045618Z" level=info msg="CreateContainer within sandbox \"29494d6818961f28b276dffb99c0ed2d0d46a137855f3a71b0f65e2620ecf35e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fd7d0a5c5578e8b0900513882565370f7396ad3c4753676a9a905f0f26284f7b\""
Dec 16 13:14:22.691433 containerd[1576]: time="2025-12-16T13:14:22.690530156Z" level=info msg="StartContainer for \"fd7d0a5c5578e8b0900513882565370f7396ad3c4753676a9a905f0f26284f7b\""
Dec 16 13:14:22.691433 containerd[1576]: time="2025-12-16T13:14:22.691306792Z" level=info msg="connecting to shim fd7d0a5c5578e8b0900513882565370f7396ad3c4753676a9a905f0f26284f7b" address="unix:///run/containerd/s/7532a09fecf843c62bf076eddeb116d31283b4dd95706eb6cd8d5f44278f1a81" protocol=ttrpc version=3
Dec 16 13:14:22.708952 systemd[1]: Started cri-containerd-fd7d0a5c5578e8b0900513882565370f7396ad3c4753676a9a905f0f26284f7b.scope - libcontainer container fd7d0a5c5578e8b0900513882565370f7396ad3c4753676a9a905f0f26284f7b.
Dec 16 13:14:22.755953 containerd[1576]: time="2025-12-16T13:14:22.755911186Z" level=info msg="StartContainer for \"fd7d0a5c5578e8b0900513882565370f7396ad3c4753676a9a905f0f26284f7b\" returns successfully"
Dec 16 13:14:23.152857 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Dec 16 13:14:23.679360 kubelet[2735]: I1216 13:14:23.679303    2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w5z5g" podStartSLOduration=5.67928843 podStartE2EDuration="5.67928843s" podCreationTimestamp="2025-12-16 13:14:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:23.678919424 +0000 UTC m=+88.395510531" watchObservedRunningTime="2025-12-16 13:14:23.67928843 +0000 UTC m=+88.395879537"
Dec 16 13:14:26.263710 systemd-networkd[1483]: lxc_health: Link UP
Dec 16 13:14:26.264109 systemd-networkd[1483]: lxc_health: Gained carrier
Dec 16 13:14:27.062763 kubelet[2735]: E1216 13:14:27.062726    2735 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49862->127.0.0.1:45837: write tcp 127.0.0.1:49862->127.0.0.1:45837: write: broken pipe
Dec 16 13:14:28.052049 systemd-networkd[1483]: lxc_health: Gained IPv6LL
Dec 16 13:14:33.331533 sshd[4536]: Connection closed by 10.0.0.1 port 43340
Dec 16 13:14:33.331894 sshd-session[4528]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:33.336147 systemd[1]: sshd@26-10.0.0.147:22-10.0.0.1:43340.service: Deactivated successfully.
Dec 16 13:14:33.338000 systemd[1]: session-27.scope: Deactivated successfully.
Dec 16 13:14:33.338994 systemd-logind[1561]: Session 27 logged out. Waiting for processes to exit.
Dec 16 13:14:33.340061 systemd-logind[1561]: Removed session 27.