Sep 12 05:48:09.890117 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 04:02:32 -00 2025 Sep 12 05:48:09.890165 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d36684c42387dba16669740eb40ca6a094be0dfb03f64a303630b6ac6cfe48d3 Sep 12 05:48:09.890179 kernel: BIOS-provided physical RAM map: Sep 12 05:48:09.890186 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 12 05:48:09.890192 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 12 05:48:09.890199 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 12 05:48:09.890207 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 12 05:48:09.890214 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 12 05:48:09.890223 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 12 05:48:09.890244 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 12 05:48:09.890251 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Sep 12 05:48:09.890258 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 12 05:48:09.890265 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 12 05:48:09.890272 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 12 05:48:09.890280 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 12 05:48:09.890289 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 12 05:48:09.890299 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 12 05:48:09.890316 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 12 05:48:09.890324 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 12 05:48:09.890333 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 12 05:48:09.890340 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 12 05:48:09.890348 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 12 05:48:09.890355 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 12 05:48:09.890362 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 12 05:48:09.890369 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 12 05:48:09.890379 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 12 05:48:09.890386 kernel: NX (Execute Disable) protection: active Sep 12 05:48:09.890393 kernel: APIC: Static calls initialized Sep 12 05:48:09.890400 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Sep 12 05:48:09.890407 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Sep 12 05:48:09.890414 kernel: extended physical RAM map: Sep 12 05:48:09.890421 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 12 05:48:09.890428 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 12 05:48:09.890436 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 12 05:48:09.890443 kernel: reserve 
setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 12 05:48:09.890450 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 12 05:48:09.890460 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 12 05:48:09.890467 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 12 05:48:09.890474 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Sep 12 05:48:09.890481 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Sep 12 05:48:09.890492 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Sep 12 05:48:09.890499 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Sep 12 05:48:09.890509 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Sep 12 05:48:09.890516 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 12 05:48:09.890524 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 12 05:48:09.890531 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 12 05:48:09.890539 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 12 05:48:09.890546 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 12 05:48:09.890553 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 12 05:48:09.890561 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 12 05:48:09.890568 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 12 05:48:09.890576 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 12 05:48:09.890585 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 12 05:48:09.890593 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 12 05:48:09.890600 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 12 05:48:09.890608 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 12 05:48:09.890615 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 12 05:48:09.890623 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 12 05:48:09.890632 kernel: efi: EFI v2.7 by EDK II Sep 12 05:48:09.890640 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Sep 12 05:48:09.890647 kernel: random: crng init done Sep 12 05:48:09.890657 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Sep 12 05:48:09.890665 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Sep 12 05:48:09.890676 kernel: secureboot: Secure boot disabled Sep 12 05:48:09.890684 kernel: SMBIOS 2.8 present. 
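[Annotation] The entries above show the kernel build string, the full command line (including the dm-verity root hash for /usr) and the firmware-provided e820/EFI memory map. A minimal sketch of reading the same information back on a running x86 system, assuming /proc/cmdline and /sys/firmware/memmap are available (the latter requires CONFIG_FIRMWARE_MEMMAP):

```python
#!/usr/bin/env python3
"""Sketch: re-read the kernel command line and firmware memory map
that the kernel logged above as "Command line:" and "BIOS-e820:"."""
import os

def kernel_cmdline() -> dict:
    # /proc/cmdline holds the same string logged as "Command line:".
    args = {}
    for tok in open("/proc/cmdline").read().split():
        key, _, val = tok.partition("=")
        args[key] = val
    return args

def firmware_memmap():
    # Each /sys/firmware/memmap/<n>/ directory mirrors one e820 line:
    # start, end and a type such as "System RAM" or "ACPI Non-volatile Storage".
    base = "/sys/firmware/memmap"
    for entry in sorted(os.listdir(base), key=int):
        d = os.path.join(base, entry)
        start = int(open(os.path.join(d, "start")).read().strip(), 16)
        end = int(open(os.path.join(d, "end")).read().strip(), 16)
        typ = open(os.path.join(d, "type")).read().strip()
        yield start, end, typ

if __name__ == "__main__":
    args = kernel_cmdline()
    print("root:", args.get("root"), " verity.usrhash:", args.get("verity.usrhash", "")[:16], "...")
    for start, end, typ in firmware_memmap():
        print(f"[mem {start:#018x}-{end:#018x}] {typ}")
```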
Sep 12 05:48:09.890691 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 12 05:48:09.890699 kernel: DMI: Memory slots populated: 1/1 Sep 12 05:48:09.890706 kernel: Hypervisor detected: KVM Sep 12 05:48:09.890713 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 12 05:48:09.890721 kernel: kvm-clock: using sched offset of 5378351025 cycles Sep 12 05:48:09.890729 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 12 05:48:09.890737 kernel: tsc: Detected 2794.748 MHz processor Sep 12 05:48:09.890745 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 12 05:48:09.890752 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 12 05:48:09.890762 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Sep 12 05:48:09.890770 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 12 05:48:09.890778 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 12 05:48:09.890786 kernel: Using GB pages for direct mapping Sep 12 05:48:09.890794 kernel: ACPI: Early table checksum verification disabled Sep 12 05:48:09.890801 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 12 05:48:09.890809 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 12 05:48:09.890817 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 05:48:09.890825 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 05:48:09.890834 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 12 05:48:09.890842 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 05:48:09.890850 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 05:48:09.890857 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 05:48:09.890865 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 05:48:09.890873 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 12 05:48:09.890881 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 12 05:48:09.890889 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 12 05:48:09.890897 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 12 05:48:09.890906 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 12 05:48:09.890914 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 12 05:48:09.890921 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 12 05:48:09.890929 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 12 05:48:09.890937 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 12 05:48:09.890944 kernel: No NUMA configuration found Sep 12 05:48:09.890952 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Sep 12 05:48:09.890959 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Sep 12 05:48:09.890967 kernel: Zone ranges: Sep 12 05:48:09.890980 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 05:48:09.890993 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Sep 12 05:48:09.891008 kernel: Normal empty Sep 12 05:48:09.891027 kernel: Device empty Sep 12 05:48:09.891043 kernel: Movable zone start for each node Sep 12 05:48:09.891051 
kernel: Early memory node ranges Sep 12 05:48:09.891058 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 12 05:48:09.891066 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 12 05:48:09.891076 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 12 05:48:09.891084 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Sep 12 05:48:09.891094 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Sep 12 05:48:09.891102 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Sep 12 05:48:09.891109 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Sep 12 05:48:09.891117 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Sep 12 05:48:09.891125 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Sep 12 05:48:09.891132 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 05:48:09.891142 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 12 05:48:09.891159 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 12 05:48:09.891167 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 05:48:09.891174 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Sep 12 05:48:09.891182 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Sep 12 05:48:09.891190 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 12 05:48:09.891200 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 12 05:48:09.891208 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Sep 12 05:48:09.891216 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 12 05:48:09.891236 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 12 05:48:09.891244 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 12 05:48:09.891255 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 12 05:48:09.891263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 12 05:48:09.891271 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 12 05:48:09.891279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 12 05:48:09.891287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 12 05:48:09.891295 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 05:48:09.891303 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 12 05:48:09.891317 kernel: TSC deadline timer available Sep 12 05:48:09.891327 kernel: CPU topo: Max. logical packages: 1 Sep 12 05:48:09.891335 kernel: CPU topo: Max. logical dies: 1 Sep 12 05:48:09.891343 kernel: CPU topo: Max. dies per package: 1 Sep 12 05:48:09.891361 kernel: CPU topo: Max. threads per core: 1 Sep 12 05:48:09.891370 kernel: CPU topo: Num. cores per package: 4 Sep 12 05:48:09.891387 kernel: CPU topo: Num. 
threads per package: 4 Sep 12 05:48:09.891396 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Sep 12 05:48:09.891404 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 12 05:48:09.891411 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 12 05:48:09.891419 kernel: kvm-guest: setup PV sched yield Sep 12 05:48:09.891430 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 12 05:48:09.891438 kernel: Booting paravirtualized kernel on KVM Sep 12 05:48:09.891447 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 05:48:09.891457 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 12 05:48:09.891465 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Sep 12 05:48:09.891473 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Sep 12 05:48:09.891481 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 12 05:48:09.891489 kernel: kvm-guest: PV spinlocks enabled Sep 12 05:48:09.891497 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 12 05:48:09.891509 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d36684c42387dba16669740eb40ca6a094be0dfb03f64a303630b6ac6cfe48d3 Sep 12 05:48:09.891520 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 05:48:09.891528 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 05:48:09.891537 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 05:48:09.891545 kernel: Fallback order for Node 0: 0 Sep 12 05:48:09.891553 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Sep 12 05:48:09.891561 kernel: Policy zone: DMA32 Sep 12 05:48:09.891569 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 05:48:09.891579 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 12 05:48:09.891587 kernel: ftrace: allocating 40123 entries in 157 pages Sep 12 05:48:09.891596 kernel: ftrace: allocated 157 pages with 5 groups Sep 12 05:48:09.891603 kernel: Dynamic Preempt: voluntary Sep 12 05:48:09.891611 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 05:48:09.891621 kernel: rcu: RCU event tracing is enabled. Sep 12 05:48:09.891629 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 12 05:48:09.891637 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 05:48:09.891645 kernel: Rude variant of Tasks RCU enabled. Sep 12 05:48:09.891655 kernel: Tracing variant of Tasks RCU enabled. Sep 12 05:48:09.891669 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 05:48:09.891694 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 12 05:48:09.891708 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 05:48:09.891733 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 05:48:09.891749 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
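[Annotation] The "CPU topo:" entries above report one package with four cores and one thread per core. A small sketch, assuming the standard topology files under /sys/devices/system/cpu, that confirms the same numbers at runtime:

```python
#!/usr/bin/env python3
"""Sketch: recount the topology the kernel reported above
("Max. logical packages: 1", "Num. cores per package: 4")."""
import glob

def read(path):
    with open(path) as f:
        return f.read().strip()

online = read("/sys/devices/system/cpu/online")   # e.g. "0-3"
packages, cores = set(), set()
cpus = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology")
for topo in cpus:
    pkg = read(topo + "/physical_package_id")
    core = read(topo + "/core_id")
    packages.add(pkg)
    cores.add((pkg, core))

print("online CPUs :", online)
print("packages    :", len(packages))
print("cores       :", len(cores))
print("threads/core:", len(cpus) // max(len(cores), 1))
```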
Sep 12 05:48:09.891758 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 12 05:48:09.891766 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 05:48:09.891774 kernel: Console: colour dummy device 80x25 Sep 12 05:48:09.891784 kernel: printk: legacy console [ttyS0] enabled Sep 12 05:48:09.891793 kernel: ACPI: Core revision 20240827 Sep 12 05:48:09.891801 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 12 05:48:09.891809 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 05:48:09.891817 kernel: x2apic enabled Sep 12 05:48:09.891825 kernel: APIC: Switched APIC routing to: physical x2apic Sep 12 05:48:09.891833 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 12 05:48:09.891848 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 12 05:48:09.891864 kernel: kvm-guest: setup PV IPIs Sep 12 05:48:09.891876 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 12 05:48:09.891895 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 12 05:48:09.891904 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Sep 12 05:48:09.891912 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 12 05:48:09.891920 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 12 05:48:09.891928 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 12 05:48:09.891936 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 05:48:09.891944 kernel: Spectre V2 : Mitigation: Retpolines Sep 12 05:48:09.891952 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 12 05:48:09.891984 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 12 05:48:09.891999 kernel: active return thunk: retbleed_return_thunk Sep 12 05:48:09.892017 kernel: RETBleed: Mitigation: untrained return thunk Sep 12 05:48:09.892036 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 12 05:48:09.892054 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 12 05:48:09.892073 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 12 05:48:09.892093 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 12 05:48:09.892102 kernel: active return thunk: srso_return_thunk Sep 12 05:48:09.892110 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 12 05:48:09.892129 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 05:48:09.892142 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 05:48:09.892150 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 05:48:09.892158 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 05:48:09.892166 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 12 05:48:09.892174 kernel: Freeing SMP alternatives memory: 32K Sep 12 05:48:09.892182 kernel: pid_max: default: 32768 minimum: 301 Sep 12 05:48:09.892190 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 12 05:48:09.892202 kernel: landlock: Up and running. 
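[Annotation] Several entries above record the speculative-execution mitigations chosen for this EPYC guest (Retpolines, RETBleed untrained return thunk, SRSO "Safe RET, no microcode"). The kernel exposes the same summary at runtime under /sys/devices/system/cpu/vulnerabilities; a minimal sketch that dumps it:

```python
#!/usr/bin/env python3
"""Sketch: list the runtime mitigation status matching the
Spectre/RETBleed/SRSO boot messages above."""
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

for name in sorted(os.listdir(VULN_DIR)):
    with open(os.path.join(VULN_DIR, name)) as f:
        print(f"{name:28s} {f.read().strip()}")
```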
Sep 12 05:48:09.892210 kernel: SELinux: Initializing. Sep 12 05:48:09.892218 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 05:48:09.892243 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 05:48:09.892251 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 12 05:48:09.892259 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 12 05:48:09.892267 kernel: ... version: 0 Sep 12 05:48:09.892275 kernel: ... bit width: 48 Sep 12 05:48:09.892283 kernel: ... generic registers: 6 Sep 12 05:48:09.892293 kernel: ... value mask: 0000ffffffffffff Sep 12 05:48:09.892304 kernel: ... max period: 00007fffffffffff Sep 12 05:48:09.892319 kernel: ... fixed-purpose events: 0 Sep 12 05:48:09.892327 kernel: ... event mask: 000000000000003f Sep 12 05:48:09.892334 kernel: signal: max sigframe size: 1776 Sep 12 05:48:09.892342 kernel: rcu: Hierarchical SRCU implementation. Sep 12 05:48:09.892350 kernel: rcu: Max phase no-delay instances is 400. Sep 12 05:48:09.892361 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 12 05:48:09.892369 kernel: smp: Bringing up secondary CPUs ... Sep 12 05:48:09.892377 kernel: smpboot: x86: Booting SMP configuration: Sep 12 05:48:09.892388 kernel: .... node #0, CPUs: #1 #2 #3 Sep 12 05:48:09.892396 kernel: smp: Brought up 1 node, 4 CPUs Sep 12 05:48:09.892404 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 12 05:48:09.892412 kernel: Memory: 2422672K/2565800K available (14336K kernel code, 2432K rwdata, 9988K rodata, 54092K init, 2872K bss, 137200K reserved, 0K cma-reserved) Sep 12 05:48:09.892420 kernel: devtmpfs: initialized Sep 12 05:48:09.892428 kernel: x86/mm: Memory block size: 128MB Sep 12 05:48:09.892436 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 12 05:48:09.892444 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 12 05:48:09.892452 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Sep 12 05:48:09.892462 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 12 05:48:09.892470 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Sep 12 05:48:09.892478 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 12 05:48:09.892486 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 05:48:09.892494 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 12 05:48:09.892502 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 05:48:09.892510 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 05:48:09.892518 kernel: audit: initializing netlink subsys (disabled) Sep 12 05:48:09.892529 kernel: audit: type=2000 audit(1757656086.852:1): state=initialized audit_enabled=0 res=1 Sep 12 05:48:09.892536 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 05:48:09.892544 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 12 05:48:09.892552 kernel: cpuidle: using governor menu Sep 12 05:48:09.892560 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 05:48:09.892568 kernel: dca service started, version 1.12.1 Sep 12 05:48:09.892576 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 
[bus 00-ff] Sep 12 05:48:09.892584 kernel: PCI: Using configuration type 1 for base access Sep 12 05:48:09.892591 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 12 05:48:09.892602 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 05:48:09.892610 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 05:48:09.892617 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 05:48:09.892625 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 05:48:09.892633 kernel: ACPI: Added _OSI(Module Device) Sep 12 05:48:09.892641 kernel: ACPI: Added _OSI(Processor Device) Sep 12 05:48:09.892649 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 05:48:09.892657 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 05:48:09.892665 kernel: ACPI: Interpreter enabled Sep 12 05:48:09.892674 kernel: ACPI: PM: (supports S0 S3 S5) Sep 12 05:48:09.892690 kernel: ACPI: Using IOAPIC for interrupt routing Sep 12 05:48:09.892702 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 12 05:48:09.892710 kernel: PCI: Using E820 reservations for host bridge windows Sep 12 05:48:09.892727 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 12 05:48:09.892745 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 05:48:09.893054 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 05:48:09.893189 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 12 05:48:09.893351 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 12 05:48:09.893363 kernel: PCI host bridge to bus 0000:00 Sep 12 05:48:09.893502 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 12 05:48:09.893616 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 12 05:48:09.893740 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 12 05:48:09.893879 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 12 05:48:09.893998 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 12 05:48:09.894116 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 12 05:48:09.894243 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 05:48:09.894435 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 12 05:48:09.894582 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Sep 12 05:48:09.894707 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Sep 12 05:48:09.894833 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Sep 12 05:48:09.895007 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 12 05:48:09.895131 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 12 05:48:09.895288 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 12 05:48:09.895421 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Sep 12 05:48:09.895554 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Sep 12 05:48:09.895678 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Sep 12 05:48:09.895842 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 12 05:48:09.895975 
kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Sep 12 05:48:09.896097 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Sep 12 05:48:09.896219 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Sep 12 05:48:09.896388 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 12 05:48:09.896511 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Sep 12 05:48:09.896633 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Sep 12 05:48:09.896753 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 12 05:48:09.896879 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Sep 12 05:48:09.897018 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 12 05:48:09.897143 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 12 05:48:09.897290 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 12 05:48:09.897431 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Sep 12 05:48:09.897555 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Sep 12 05:48:09.897713 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 12 05:48:09.897843 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Sep 12 05:48:09.897854 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 12 05:48:09.897862 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 12 05:48:09.897870 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 12 05:48:09.897878 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 12 05:48:09.897886 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 12 05:48:09.897894 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 12 05:48:09.897902 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 12 05:48:09.897913 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 12 05:48:09.897921 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 12 05:48:09.897929 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 12 05:48:09.897936 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 12 05:48:09.897944 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 12 05:48:09.897952 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 12 05:48:09.897960 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 12 05:48:09.897971 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 12 05:48:09.897979 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 12 05:48:09.897990 kernel: iommu: Default domain type: Translated Sep 12 05:48:09.897998 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 05:48:09.898006 kernel: efivars: Registered efivars operations Sep 12 05:48:09.898014 kernel: PCI: Using ACPI for IRQ routing Sep 12 05:48:09.898022 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 12 05:48:09.898030 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 12 05:48:09.898038 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Sep 12 05:48:09.898045 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Sep 12 05:48:09.898053 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Sep 12 05:48:09.898063 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Sep 12 05:48:09.898071 kernel: e820: reserve RAM buffer [mem 
0x9c8ed000-0x9fffffff] Sep 12 05:48:09.898079 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Sep 12 05:48:09.898087 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Sep 12 05:48:09.898208 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 12 05:48:09.898357 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 12 05:48:09.898478 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 12 05:48:09.898488 kernel: vgaarb: loaded Sep 12 05:48:09.898500 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 12 05:48:09.898508 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 12 05:48:09.898516 kernel: clocksource: Switched to clocksource kvm-clock Sep 12 05:48:09.898524 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 05:48:09.898532 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 05:48:09.898540 kernel: pnp: PnP ACPI init Sep 12 05:48:09.898748 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 12 05:48:09.898767 kernel: pnp: PnP ACPI: found 6 devices Sep 12 05:48:09.898778 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 05:48:09.898786 kernel: NET: Registered PF_INET protocol family Sep 12 05:48:09.898794 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 05:48:09.898803 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 05:48:09.898811 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 05:48:09.898820 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 05:48:09.898828 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 05:48:09.898836 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 05:48:09.898847 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 05:48:09.898855 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 05:48:09.898867 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 05:48:09.898875 kernel: NET: Registered PF_XDP protocol family Sep 12 05:48:09.899001 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Sep 12 05:48:09.899124 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Sep 12 05:48:09.899255 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 12 05:48:09.899404 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 12 05:48:09.899527 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 12 05:48:09.899638 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Sep 12 05:48:09.899776 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 12 05:48:09.899895 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 12 05:48:09.899905 kernel: PCI: CLS 0 bytes, default 64 Sep 12 05:48:09.899914 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 12 05:48:09.899923 kernel: Initialise system trusted keyrings Sep 12 05:48:09.899934 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 05:48:09.899943 kernel: Key type asymmetric registered Sep 12 05:48:09.899951 kernel: Asymmetric key parser 'x509' registered Sep 12 05:48:09.899959 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Sep 12 05:48:09.899967 kernel: io scheduler mq-deadline registered Sep 12 05:48:09.899976 kernel: io scheduler kyber registered Sep 12 05:48:09.899984 kernel: io scheduler bfq registered Sep 12 05:48:09.899992 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 05:48:09.900004 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 12 05:48:09.900012 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 12 05:48:09.900020 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 12 05:48:09.900029 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 05:48:09.900037 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 05:48:09.900045 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 12 05:48:09.900054 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 05:48:09.900062 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 05:48:09.900201 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 12 05:48:09.900350 kernel: rtc_cmos 00:04: registered as rtc0 Sep 12 05:48:09.900467 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T05:48:09 UTC (1757656089) Sep 12 05:48:09.900603 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 12 05:48:09.900615 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 12 05:48:09.900629 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Sep 12 05:48:09.900637 kernel: efifb: probing for efifb Sep 12 05:48:09.900646 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 12 05:48:09.900655 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 12 05:48:09.900665 kernel: efifb: scrolling: redraw Sep 12 05:48:09.900673 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 12 05:48:09.900682 kernel: Console: switching to colour frame buffer device 160x50 Sep 12 05:48:09.900690 kernel: fb0: EFI VGA frame buffer device Sep 12 05:48:09.900698 kernel: pstore: Using crash dump compression: deflate Sep 12 05:48:09.900707 kernel: pstore: Registered efi_pstore as persistent store backend Sep 12 05:48:09.900715 kernel: NET: Registered PF_INET6 protocol family Sep 12 05:48:09.900724 kernel: Segment Routing with IPv6 Sep 12 05:48:09.900732 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 05:48:09.900743 kernel: NET: Registered PF_PACKET protocol family Sep 12 05:48:09.900751 kernel: Key type dns_resolver registered Sep 12 05:48:09.900759 kernel: IPI shorthand broadcast: enabled Sep 12 05:48:09.900767 kernel: sched_clock: Marking stable (3217001821, 217596619)->(3473214937, -38616497) Sep 12 05:48:09.900776 kernel: registered taskstats version 1 Sep 12 05:48:09.900784 kernel: Loading compiled-in X.509 certificates Sep 12 05:48:09.900792 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: c974434132f0296e0aaf9b1358c8dc50eba5c8b9' Sep 12 05:48:09.900801 kernel: Demotion targets for Node 0: null Sep 12 05:48:09.900809 kernel: Key type .fscrypt registered Sep 12 05:48:09.900819 kernel: Key type fscrypt-provisioning registered Sep 12 05:48:09.900827 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 05:48:09.900836 kernel: ima: Allocated hash algorithm: sha1 Sep 12 05:48:09.900844 kernel: ima: No architecture policies found Sep 12 05:48:09.900852 kernel: clk: Disabling unused clocks Sep 12 05:48:09.900861 kernel: Warning: unable to open an initial console. 
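[Annotation] The PCI enumeration above lists the QEMU/virtio devices on bus 0000:00 (vendor 0x1af4 is virtio: 1af4:1001 block, 1af4:1000 net, 1af4:1005 entropy, plus the ICH9 AHCI and SMBus functions). A sketch that walks the same devices through sysfs:

```python
#!/usr/bin/env python3
"""Sketch: walk /sys/bus/pci/devices and print vendor:device IDs,
mirroring the "pci 0000:00:xx.x: [....:....]" lines above."""
import os

PCI = "/sys/bus/pci/devices"

for addr in sorted(os.listdir(PCI)):
    dev = os.path.join(PCI, addr)
    vendor = open(os.path.join(dev, "vendor")).read().strip()   # e.g. 0x1af4
    device = open(os.path.join(dev, "device")).read().strip()   # e.g. 0x1001
    klass = open(os.path.join(dev, "class")).read().strip()     # e.g. 0x010000
    print(f"{addr}  {vendor}:{device}  class={klass}")
```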
Sep 12 05:48:09.900869 kernel: Freeing unused kernel image (initmem) memory: 54092K Sep 12 05:48:09.900878 kernel: Write protecting the kernel read-only data: 24576k Sep 12 05:48:09.900888 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 12 05:48:09.900896 kernel: Run /init as init process Sep 12 05:48:09.900904 kernel: with arguments: Sep 12 05:48:09.900912 kernel: /init Sep 12 05:48:09.900921 kernel: with environment: Sep 12 05:48:09.900929 kernel: HOME=/ Sep 12 05:48:09.900937 kernel: TERM=linux Sep 12 05:48:09.900945 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 05:48:09.900958 systemd[1]: Successfully made /usr/ read-only. Sep 12 05:48:09.900972 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 05:48:09.900982 systemd[1]: Detected virtualization kvm. Sep 12 05:48:09.900990 systemd[1]: Detected architecture x86-64. Sep 12 05:48:09.900999 systemd[1]: Running in initrd. Sep 12 05:48:09.901007 systemd[1]: No hostname configured, using default hostname. Sep 12 05:48:09.901016 systemd[1]: Hostname set to . Sep 12 05:48:09.901025 systemd[1]: Initializing machine ID from VM UUID. Sep 12 05:48:09.901033 systemd[1]: Queued start job for default target initrd.target. Sep 12 05:48:09.901044 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 05:48:09.901053 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 05:48:09.901066 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 05:48:09.901075 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 05:48:09.901083 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 05:48:09.901093 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 05:48:09.901105 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 05:48:09.901114 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 05:48:09.901123 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 05:48:09.901132 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 05:48:09.901141 systemd[1]: Reached target paths.target - Path Units. Sep 12 05:48:09.901150 systemd[1]: Reached target slices.target - Slice Units. Sep 12 05:48:09.901158 systemd[1]: Reached target swap.target - Swaps. Sep 12 05:48:09.901167 systemd[1]: Reached target timers.target - Timer Units. Sep 12 05:48:09.901176 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 05:48:09.901187 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 05:48:09.901196 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 05:48:09.901205 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 05:48:09.901214 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
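[Annotation] The "Expecting device dev-disk-by\x2dlabel-…" entries above are systemd device units derived from the udev symlinks under /dev/disk, with "-" escaped as \x2d in the unit name. A sketch, using a simplified version of systemd's escaping (real escaping is done by systemd-escape and covers more characters), that maps those symlinks back to block devices:

```python
#!/usr/bin/env python3
"""Sketch: resolve the /dev/disk/by-* symlinks behind the
dev-disk-by\\x2dlabel-*.device units systemd is waiting for above."""
import os

def list_links(subdir):
    base = os.path.join("/dev/disk", subdir)
    if not os.path.isdir(base):
        return
    for name in sorted(os.listdir(base)):
        target = os.path.realpath(os.path.join(base, name))
        # Simplified escaping: systemd turns "-" into \x2d inside path components.
        unit = ("dev-disk-" + subdir.replace("-", "\\x2d") + "-"
                + name.replace("-", "\\x2d") + ".device")
        print(f"{subdir}/{name:24s} -> {target}   ({unit})")

for sub in ("by-label", "by-partlabel", "by-partuuid"):
    list_links(sub)
```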
Sep 12 05:48:09.901223 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 05:48:09.901245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 05:48:09.901254 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 05:48:09.901263 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 05:48:09.901274 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 05:48:09.901283 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 05:48:09.901292 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 05:48:09.901301 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 05:48:09.901316 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 05:48:09.901325 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 05:48:09.901334 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 05:48:09.901343 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 05:48:09.901354 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 05:48:09.901363 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 05:48:09.901409 systemd-journald[220]: Collecting audit messages is disabled. Sep 12 05:48:09.901433 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 05:48:09.901442 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:48:09.901452 systemd-journald[220]: Journal started Sep 12 05:48:09.901473 systemd-journald[220]: Runtime Journal (/run/log/journal/7d93e6318e9142ca99465db870742d62) is 6M, max 48.4M, 42.4M free. Sep 12 05:48:09.891420 systemd-modules-load[223]: Inserted module 'overlay' Sep 12 05:48:09.906267 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 05:48:09.909677 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 05:48:09.912965 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 05:48:09.915647 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 05:48:09.921274 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 05:48:09.923251 kernel: Bridge firewalling registered Sep 12 05:48:09.923238 systemd-modules-load[223]: Inserted module 'br_netfilter' Sep 12 05:48:09.925887 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 05:48:09.926185 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 05:48:09.928570 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 05:48:09.938406 systemd-tmpfiles[238]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 05:48:09.939890 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 05:48:09.942496 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 05:48:09.944128 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
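[Annotation] The module-load entries above show the initrd inserting 'overlay' and 'br_netfilter', and the kernel notes that bridge filtering now requires br_netfilter to be loaded explicitly. A sketch that checks /proc/modules and the bridge-nf sysctl that only exists while br_netfilter is loaded:

```python
#!/usr/bin/env python3
"""Sketch: verify the modules loaded above and the bridge-netfilter
sysctl that appears only when br_netfilter is present."""
import os

with open("/proc/modules") as f:
    mods = {line.split()[0] for line in f}

for want in ("overlay", "br_netfilter"):
    print(f"{want:14s} {'loaded' if want in mods else 'missing'}")

sysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
if os.path.exists(sysctl):
    print("bridge-nf-call-iptables =", open(sysctl).read().strip())
else:
    print("bridge-nf sysctls absent (br_netfilter not loaded)")
```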
Sep 12 05:48:09.946507 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 05:48:09.957394 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 05:48:09.959626 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 05:48:09.979188 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d36684c42387dba16669740eb40ca6a094be0dfb03f64a303630b6ac6cfe48d3 Sep 12 05:48:09.993775 systemd-resolved[254]: Positive Trust Anchors: Sep 12 05:48:09.993790 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 05:48:09.993819 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 05:48:09.996359 systemd-resolved[254]: Defaulting to hostname 'linux'. Sep 12 05:48:09.997644 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 05:48:10.003472 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 05:48:10.093255 kernel: SCSI subsystem initialized Sep 12 05:48:10.102260 kernel: Loading iSCSI transport class v2.0-870. Sep 12 05:48:10.112265 kernel: iscsi: registered transport (tcp) Sep 12 05:48:10.133627 kernel: iscsi: registered transport (qla4xxx) Sep 12 05:48:10.133711 kernel: QLogic iSCSI HBA Driver Sep 12 05:48:10.156459 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 05:48:10.178002 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 05:48:10.179110 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 05:48:10.247288 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 05:48:10.249367 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 05:48:10.312266 kernel: raid6: avx2x4 gen() 30005 MB/s Sep 12 05:48:10.329252 kernel: raid6: avx2x2 gen() 30171 MB/s Sep 12 05:48:10.346302 kernel: raid6: avx2x1 gen() 25494 MB/s Sep 12 05:48:10.346327 kernel: raid6: using algorithm avx2x2 gen() 30171 MB/s Sep 12 05:48:10.364381 kernel: raid6: .... xor() 18442 MB/s, rmw enabled Sep 12 05:48:10.364467 kernel: raid6: using avx2x2 recovery algorithm Sep 12 05:48:10.387258 kernel: xor: automatically using best checksumming function avx Sep 12 05:48:10.622280 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 05:48:10.632493 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 05:48:10.634852 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 05:48:10.666592 systemd-udevd[471]: Using default interface naming scheme 'v255'. 
Sep 12 05:48:10.672462 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 05:48:10.676187 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 05:48:10.710687 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation Sep 12 05:48:10.742047 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 05:48:10.743690 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 05:48:10.827951 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 05:48:10.829198 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 05:48:10.879265 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 05:48:10.883259 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 05:48:10.892104 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 12 05:48:10.892120 kernel: AES CTR mode by8 optimization enabled Sep 12 05:48:10.895530 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 05:48:10.906033 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 05:48:10.906082 kernel: GPT:9289727 != 19775487 Sep 12 05:48:10.906093 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 05:48:10.906112 kernel: GPT:9289727 != 19775487 Sep 12 05:48:10.906122 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 05:48:10.906133 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 05:48:10.921665 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 05:48:10.921860 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:48:10.923478 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 05:48:10.928740 kernel: libata version 3.00 loaded. Sep 12 05:48:10.929211 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 05:48:10.932152 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 05:48:10.951258 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 05:48:10.951474 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 05:48:10.955579 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 12 05:48:10.955755 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 12 05:48:10.955899 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 05:48:10.964782 kernel: scsi host0: ahci Sep 12 05:48:10.965377 kernel: scsi host1: ahci Sep 12 05:48:10.966251 kernel: scsi host2: ahci Sep 12 05:48:10.968298 kernel: scsi host3: ahci Sep 12 05:48:10.968487 kernel: scsi host4: ahci Sep 12 05:48:10.970341 kernel: scsi host5: ahci Sep 12 05:48:10.970496 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 12 05:48:10.968465 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
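[Annotation] The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", "9289727 != 19775487") mean the image was written for a smaller disk and the backup GPT header has not yet been relocated to the new end of /dev/vda; the kernel suggests GNU Parted, and Flatcar's first-boot tooling handles it. A sketch that reproduces the check by reading the primary GPT header directly (header layout per the UEFI spec; the device path is an assumption taken from this log, and the read requires root):

```python
#!/usr/bin/env python3
"""Sketch: compare the backup-header LBA stored in the primary GPT
header with the real end of the disk, as the kernel did above."""
import fcntl
import struct

DISK = "/dev/vda"          # assumption: the same virtio disk as in the log
SECTOR = 512
BLKGETSIZE64 = 0x80081272  # ioctl returning the block device size in bytes

with open(DISK, "rb") as f:
    f.seek(1 * SECTOR)                                   # primary GPT header is at LBA 1
    hdr = f.read(92)
    sig = hdr[0:8]                                       # b"EFI PART" when valid
    backup_lba = struct.unpack_from("<Q", hdr, 32)[0]    # offset 32: alternate header LBA
    size_bytes = struct.unpack("<Q", fcntl.ioctl(f, BLKGETSIZE64, b"\0" * 8))[0]

last_lba = size_bytes // SECTOR - 1
print("signature :", sig)
print("backup LBA:", backup_lba)
print("last LBA  :", last_lba)
if sig == b"EFI PART" and backup_lba != last_lba:
    print("backup header is not at the end of the disk (the condition warned about above)")
```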
Sep 12 05:48:10.978120 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 12 05:48:10.978136 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 12 05:48:10.978147 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 12 05:48:10.978157 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 12 05:48:10.978168 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 12 05:48:10.974637 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:48:10.991924 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 05:48:10.999739 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 05:48:11.000939 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 05:48:11.011839 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 05:48:11.013988 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 05:48:11.036953 disk-uuid[635]: Primary Header is updated. Sep 12 05:48:11.036953 disk-uuid[635]: Secondary Entries is updated. Sep 12 05:48:11.036953 disk-uuid[635]: Secondary Header is updated. Sep 12 05:48:11.041249 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 05:48:11.045270 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 05:48:11.284263 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 05:48:11.284340 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 05:48:11.285928 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 05:48:11.285949 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 05:48:11.286250 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 05:48:11.287266 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 05:48:11.288258 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 05:48:11.288278 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 05:48:11.288597 kernel: ata3.00: applying bridge limits Sep 12 05:48:11.289790 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 05:48:11.289801 kernel: ata3.00: configured for UDMA/100 Sep 12 05:48:11.291260 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 05:48:11.335833 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 05:48:11.336176 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 05:48:11.356321 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 05:48:11.805963 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 05:48:11.807979 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 05:48:11.810002 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 05:48:11.811427 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 05:48:11.815067 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 05:48:11.844241 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 05:48:12.047536 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 05:48:12.047596 disk-uuid[636]: The operation has completed successfully. 
Sep 12 05:48:12.074770 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 05:48:12.074922 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 05:48:12.115851 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 05:48:12.145785 sh[665]: Success Sep 12 05:48:12.163336 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 05:48:12.163379 kernel: device-mapper: uevent: version 1.0.3 Sep 12 05:48:12.164404 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 05:48:12.174286 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 12 05:48:12.206499 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 05:48:12.210599 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 05:48:12.223937 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 05:48:12.230264 kernel: BTRFS: device fsid 29ae74b1-0ab1-4a84-96e7-98d98e1ec77f devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (677) Sep 12 05:48:12.232341 kernel: BTRFS info (device dm-0): first mount of filesystem 29ae74b1-0ab1-4a84-96e7-98d98e1ec77f Sep 12 05:48:12.232375 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 05:48:12.238152 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 05:48:12.238178 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 05:48:12.240120 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 05:48:12.240854 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 05:48:12.243128 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 05:48:12.244001 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 05:48:12.248414 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 05:48:12.281169 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (708) Sep 12 05:48:12.281226 kernel: BTRFS info (device vda6): first mount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:48:12.281266 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 05:48:12.285571 kernel: BTRFS info (device vda6): turning on async discard Sep 12 05:48:12.285601 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 05:48:12.291279 kernel: BTRFS info (device vda6): last unmount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:48:12.291658 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 05:48:12.293280 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 05:48:12.428082 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
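[Annotation] verity-setup.service above creates /dev/mapper/usr from the USR-A partition using the root hash passed as verity.usrhash= on the kernel command line, so /usr is mounted read-only and integrity-checked. A sketch of the equivalent manual step built on veritysetup from cryptsetup; it only prints the command, because the hash-tree offset inside the partition is image-specific and not recoverable from the log (treat the exact invocation as an assumption about Flatcar's layout):

```python
#!/usr/bin/env python3
"""Sketch: show how verity.usr= and verity.usrhash= from the command
line above would feed a veritysetup invocation for /dev/mapper/usr."""
import shlex

def cmdline_value(key):
    for tok in open("/proc/cmdline").read().split():
        k, _, v = tok.partition("=")
        if k == key:
            return v
    return None

usr_spec = cmdline_value("verity.usr")        # "PARTUUID=7130c94a-..." in this log
root_hash = cmdline_value("verity.usrhash")   # "d36684c4..." in this log

if usr_spec and usr_spec.startswith("PARTUUID="):
    usr_dev = "/dev/disk/by-partuuid/" + usr_spec.split("=", 1)[1]
else:
    usr_dev = usr_spec

# Assumption: data and hash tree share the USR-A partition, so the real
# service also supplies a hash offset; that offset is not in the log.
cmd = ["veritysetup", "open", usr_dev, "usr", usr_dev, root_hash]
print("would run:", shlex.join(cmd))
```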
Sep 12 05:48:12.431583 ignition[751]: Ignition 2.22.0 Sep 12 05:48:12.431609 ignition[751]: Stage: fetch-offline Sep 12 05:48:12.431642 ignition[751]: no configs at "/usr/lib/ignition/base.d" Sep 12 05:48:12.431652 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 05:48:12.431748 ignition[751]: parsed url from cmdline: "" Sep 12 05:48:12.431752 ignition[751]: no config URL provided Sep 12 05:48:12.431757 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 05:48:12.431766 ignition[751]: no config at "/usr/lib/ignition/user.ign" Sep 12 05:48:12.431790 ignition[751]: op(1): [started] loading QEMU firmware config module Sep 12 05:48:12.431795 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 05:48:12.438203 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 05:48:12.446587 ignition[751]: op(1): [finished] loading QEMU firmware config module Sep 12 05:48:12.446624 ignition[751]: QEMU firmware config was not found. Ignoring... Sep 12 05:48:12.482207 systemd-networkd[858]: lo: Link UP Sep 12 05:48:12.482219 systemd-networkd[858]: lo: Gained carrier Sep 12 05:48:12.483827 systemd-networkd[858]: Enumeration completed Sep 12 05:48:12.483923 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 05:48:12.484211 systemd-networkd[858]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 05:48:12.484216 systemd-networkd[858]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 05:48:12.485102 systemd-networkd[858]: eth0: Link UP Sep 12 05:48:12.485294 systemd-networkd[858]: eth0: Gained carrier Sep 12 05:48:12.485303 systemd-networkd[858]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 05:48:12.485753 systemd[1]: Reached target network.target - Network. Sep 12 05:48:12.498895 ignition[751]: parsing config with SHA512: 0bb7a9298d04b0b891da7c07ebe81e6349edc10d4eaa3b58b8109627aa218d21f32e97404c5a27128ec455f34500805d9bd95b770cd8c8f55663301b35f3acee Sep 12 05:48:12.504534 unknown[751]: fetched base config from "system" Sep 12 05:48:12.504547 unknown[751]: fetched user config from "qemu" Sep 12 05:48:12.504933 ignition[751]: fetch-offline: fetch-offline passed Sep 12 05:48:12.505305 systemd-networkd[858]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 05:48:12.504987 ignition[751]: Ignition finished successfully Sep 12 05:48:12.508351 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 05:48:12.509419 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 05:48:12.513895 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 05:48:12.572367 ignition[863]: Ignition 2.22.0 Sep 12 05:48:12.572382 ignition[863]: Stage: kargs Sep 12 05:48:12.572527 ignition[863]: no configs at "/usr/lib/ignition/base.d" Sep 12 05:48:12.572539 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 05:48:12.573313 ignition[863]: kargs: kargs passed Sep 12 05:48:12.573372 ignition[863]: Ignition finished successfully Sep 12 05:48:12.580937 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 05:48:12.583908 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
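Ignition records the SHA-512 of the rendered config it is about to apply (the "parsing config with SHA512: ..." line above). Assuming that digest is taken over the raw bytes of the config file, it can be reproduced like this; the path below is the user.ign location mentioned in the log:

    # Reproduce the "parsing config with SHA512: ..." digest Ignition logs.
    # Assumption: the digest is SHA-512 over the raw bytes of the rendered
    # config; point CONFIG at whichever config was actually applied.
    import hashlib
    import sys

    CONFIG = "/usr/lib/ignition/user.ign"  # path referenced in the log above

    def config_sha512(path):
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        print(config_sha512(sys.argv[1] if len(sys.argv) > 1 else CONFIG))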
Sep 12 05:48:12.632464 ignition[871]: Ignition 2.22.0 Sep 12 05:48:12.632478 ignition[871]: Stage: disks Sep 12 05:48:12.632605 ignition[871]: no configs at "/usr/lib/ignition/base.d" Sep 12 05:48:12.632615 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 05:48:12.633346 ignition[871]: disks: disks passed Sep 12 05:48:12.633396 ignition[871]: Ignition finished successfully Sep 12 05:48:12.639719 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 05:48:12.641831 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 05:48:12.643988 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 05:48:12.646461 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 05:48:12.648403 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 05:48:12.650333 systemd[1]: Reached target basic.target - Basic System. Sep 12 05:48:12.653072 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 05:48:12.694568 systemd-fsck[881]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 05:48:12.702793 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 05:48:12.706770 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 05:48:12.824258 kernel: EXT4-fs (vda9): mounted filesystem 2b8062f9-897a-46cb-bde4-2b62ba4cc712 r/w with ordered data mode. Quota mode: none. Sep 12 05:48:12.824719 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 05:48:12.825596 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 05:48:12.829253 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 05:48:12.831205 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 05:48:12.833332 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 05:48:12.833397 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 05:48:12.833429 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 05:48:12.847514 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 05:48:12.848960 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 05:48:12.874266 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (889) Sep 12 05:48:12.874334 kernel: BTRFS info (device vda6): first mount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:48:12.876259 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 05:48:12.880271 kernel: BTRFS info (device vda6): turning on async discard Sep 12 05:48:12.880332 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 05:48:12.883613 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 05:48:12.903956 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 05:48:12.909026 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory Sep 12 05:48:12.912975 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 05:48:12.916784 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 05:48:13.010452 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
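Before /sysroot is mounted, systemd-fsck-root.service checks the filesystem behind /dev/disk/by-label/ROOT and reports the "clean, ... files, ... blocks" summary seen above. A read-only version of that check, assuming e2fsprogs is installed:

    # Read-only check in the spirit of systemd-fsck-root.service: run the
    # ext4 checker against the ROOT device without modifying it. `-n`
    # answers "no" to every repair prompt.
    import subprocess

    ROOT_DEV = "/dev/disk/by-label/ROOT"  # device path used in the log above

    result = subprocess.run(["e2fsck", "-n", ROOT_DEV],
                            capture_output=True, text=True)
    print(result.stdout.strip())
    # Exit code 0 means the filesystem is clean; nonzero values form a
    # bitmask of problems (see e2fsck(8) for the exact meanings).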
Sep 12 05:48:13.012688 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 05:48:13.014591 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 05:48:13.036261 kernel: BTRFS info (device vda6): last unmount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:48:13.048395 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 05:48:13.096781 ignition[1003]: INFO : Ignition 2.22.0 Sep 12 05:48:13.096781 ignition[1003]: INFO : Stage: mount Sep 12 05:48:13.098760 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 05:48:13.098760 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 05:48:13.101864 ignition[1003]: INFO : mount: mount passed Sep 12 05:48:13.102798 ignition[1003]: INFO : Ignition finished successfully Sep 12 05:48:13.106615 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 05:48:13.108942 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 05:48:13.230505 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 05:48:13.232420 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 05:48:13.262672 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015) Sep 12 05:48:13.262722 kernel: BTRFS info (device vda6): first mount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:48:13.262735 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 05:48:13.266872 kernel: BTRFS info (device vda6): turning on async discard Sep 12 05:48:13.266895 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 05:48:13.268609 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 05:48:13.385496 ignition[1032]: INFO : Ignition 2.22.0 Sep 12 05:48:13.385496 ignition[1032]: INFO : Stage: files Sep 12 05:48:13.387806 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 05:48:13.387806 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 05:48:13.387806 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping Sep 12 05:48:13.387806 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 05:48:13.387806 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 05:48:13.395994 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 05:48:13.395994 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 05:48:13.395994 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 05:48:13.395994 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 05:48:13.395994 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 12 05:48:13.390957 unknown[1032]: wrote ssh authorized keys file for user: core Sep 12 05:48:13.468384 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 05:48:13.851669 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 05:48:13.853978 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] 
writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 05:48:13.853978 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 12 05:48:13.877415 systemd-networkd[858]: eth0: Gained IPv6LL Sep 12 05:48:14.011855 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 05:48:14.627438 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 05:48:14.627438 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 05:48:14.631755 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 05:48:14.631755 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 05:48:14.631755 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 05:48:14.631755 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 05:48:14.631755 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 05:48:14.631755 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 05:48:14.631755 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 05:48:14.644909 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 05:48:14.644909 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 05:48:14.644909 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 05:48:14.651394 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 05:48:14.654072 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 05:48:14.654072 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 12 05:48:14.927102 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 05:48:15.405788 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 05:48:15.405788 ignition[1032]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 05:48:15.409854 ignition[1032]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 05:48:15.540506 ignition[1032]: 
INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 05:48:15.540506 ignition[1032]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 05:48:15.540506 ignition[1032]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 12 05:48:15.546082 ignition[1032]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 05:48:15.546082 ignition[1032]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 05:48:15.546082 ignition[1032]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 12 05:48:15.546082 ignition[1032]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 05:48:15.570049 ignition[1032]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 05:48:15.577479 ignition[1032]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 05:48:15.579056 ignition[1032]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 05:48:15.579056 ignition[1032]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 12 05:48:15.579056 ignition[1032]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 05:48:15.579056 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 05:48:15.579056 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 05:48:15.579056 ignition[1032]: INFO : files: files passed Sep 12 05:48:15.579056 ignition[1032]: INFO : Ignition finished successfully Sep 12 05:48:15.584770 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 05:48:15.586516 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 05:48:15.590875 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 05:48:15.603798 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 05:48:15.603940 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 05:48:15.605996 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 05:48:15.610004 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 05:48:15.611757 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 05:48:15.613298 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 05:48:15.612968 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 05:48:15.613546 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 05:48:15.618208 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 05:48:15.654110 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
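The files stage above is driven by a declarative Ignition config: files to download, a symlink into /etc/extensions, systemd units to write, and presets to flip. The sketch below reconstructs roughly what such a config could look like, built as a Python dict and printed as JSON; the spec version, file mode, SSH key, and unit contents are illustrative assumptions rather than the actual config:

    # Rough, illustrative reconstruction of the kind of Ignition config that
    # produces the files/links/units operations logged above. Version string,
    # modes, key material, and unit bodies are assumptions, not the real config.
    import json

    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "passwd": {"users": [{"name": "core",
                              "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
                {"path": "/opt/bin/cilium.tar.gz",
                 "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"}},
                {"path": "/etc/flatcar/update.conf", "mode": 420},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"},
            ],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},
            {"name": "coreos-metadata.service", "enabled": False},
        ]},
    }

    print(json.dumps(config, indent=2))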
Sep 12 05:48:15.654285 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 05:48:15.655405 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 05:48:15.655670 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 05:48:15.656023 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 05:48:15.662422 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 05:48:15.680644 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 05:48:15.683082 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 05:48:15.703723 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 05:48:15.703881 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 05:48:15.706012 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 05:48:15.708117 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 05:48:15.708251 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 05:48:15.712726 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 05:48:15.712863 systemd[1]: Stopped target basic.target - Basic System. Sep 12 05:48:15.714703 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 05:48:15.715015 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 05:48:15.715518 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 05:48:15.715837 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 05:48:15.716168 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 05:48:15.716652 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 05:48:15.716982 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 05:48:15.717323 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 05:48:15.717796 systemd[1]: Stopped target swap.target - Swaps. Sep 12 05:48:15.718093 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 05:48:15.718217 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 05:48:15.734314 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 05:48:15.735405 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 05:48:15.735679 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 05:48:15.739355 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 05:48:15.740604 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 05:48:15.740719 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 05:48:15.741868 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 05:48:15.741975 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 05:48:15.744740 systemd[1]: Stopped target paths.target - Path Units. Sep 12 05:48:15.744964 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 05:48:15.751310 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 12 05:48:15.751462 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 05:48:15.751779 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 05:48:15.752098 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 05:48:15.752196 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 05:48:15.756999 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 05:48:15.757087 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 05:48:15.759547 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 05:48:15.759660 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 05:48:15.760463 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 05:48:15.760564 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 05:48:15.765156 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 05:48:15.766275 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 05:48:15.766391 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 05:48:15.768704 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 05:48:15.771678 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 05:48:15.771796 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 05:48:15.774012 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 05:48:15.774114 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 05:48:15.782529 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 05:48:15.782643 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 05:48:15.798322 ignition[1087]: INFO : Ignition 2.22.0 Sep 12 05:48:15.798322 ignition[1087]: INFO : Stage: umount Sep 12 05:48:15.799983 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 05:48:15.799983 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 05:48:15.799983 ignition[1087]: INFO : umount: umount passed Sep 12 05:48:15.799983 ignition[1087]: INFO : Ignition finished successfully Sep 12 05:48:15.799866 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 05:48:15.803481 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 05:48:15.803642 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 05:48:15.804967 systemd[1]: Stopped target network.target - Network. Sep 12 05:48:15.806494 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 05:48:15.806551 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 05:48:15.807455 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 05:48:15.807504 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 05:48:15.807770 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 05:48:15.807817 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 05:48:15.808095 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 05:48:15.808135 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 05:48:15.808690 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 05:48:15.814221 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Sep 12 05:48:15.821500 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 05:48:15.821637 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 05:48:15.826811 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 05:48:15.827122 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 05:48:15.827276 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 05:48:15.830322 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 05:48:15.830970 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 05:48:15.832773 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 05:48:15.832821 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 05:48:15.835868 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 05:48:15.836888 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 05:48:15.836943 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 05:48:15.838991 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 05:48:15.839040 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 05:48:15.841107 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 05:48:15.841166 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 05:48:15.842105 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 05:48:15.842160 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 05:48:15.846346 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 05:48:15.848100 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 05:48:15.848177 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 05:48:15.862717 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 05:48:15.862883 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 05:48:15.866181 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 05:48:15.866388 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 05:48:15.869730 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 05:48:15.869787 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 05:48:15.871745 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 05:48:15.871782 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 05:48:15.872753 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 05:48:15.872805 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 05:48:15.874978 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 05:48:15.875029 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 05:48:15.878969 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 05:48:15.879027 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 05:48:15.882289 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Sep 12 05:48:15.882893 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 12 05:48:15.882951 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 05:48:15.887161 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 05:48:15.887210 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 05:48:15.891441 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 05:48:15.891493 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:48:15.895817 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 12 05:48:15.895883 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 05:48:15.895948 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 05:48:15.911552 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 05:48:15.911675 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 05:48:15.995178 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 05:48:15.995365 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 05:48:15.997687 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 05:48:15.998331 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 05:48:15.998411 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 05:48:16.003358 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 05:48:16.035826 systemd[1]: Switching root. Sep 12 05:48:16.081298 systemd-journald[220]: Journal stopped Sep 12 05:48:17.634007 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Sep 12 05:48:17.634087 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 05:48:17.634120 kernel: SELinux: policy capability open_perms=1 Sep 12 05:48:17.634132 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 05:48:17.634144 kernel: SELinux: policy capability always_check_network=0 Sep 12 05:48:17.634155 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 05:48:17.634168 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 05:48:17.634182 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 05:48:17.634194 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 05:48:17.634211 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 05:48:17.634222 kernel: audit: type=1403 audit(1757656096.712:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 05:48:17.634255 systemd[1]: Successfully loaded SELinux policy in 70.385ms. Sep 12 05:48:17.634281 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.127ms. Sep 12 05:48:17.634295 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 05:48:17.634311 systemd[1]: Detected virtualization kvm. Sep 12 05:48:17.634326 systemd[1]: Detected architecture x86-64. Sep 12 05:48:17.634345 systemd[1]: Detected first boot. 
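"Detected first boot" is decided from /etc/machine-id: when the file is missing, empty, or still the "uninitialized" placeholder, systemd treats the boot as a first boot and applies unit presets accordingly. A rough approximation of that check (the authoritative logic lives in systemd itself):

    # Approximate the "Detected first boot" condition: first boot when
    # /etc/machine-id is absent, empty, or the literal "uninitialized".
    from pathlib import Path

    def looks_like_first_boot(machine_id_path="/etc/machine-id"):
        p = Path(machine_id_path)
        if not p.exists():
            return True
        return p.read_text().strip() in ("", "uninitialized")

    print("first boot" if looks_like_first_boot() else "not first boot")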
Sep 12 05:48:17.634359 systemd[1]: Initializing machine ID from VM UUID. Sep 12 05:48:17.634372 zram_generator::config[1132]: No configuration found. Sep 12 05:48:17.634385 kernel: Guest personality initialized and is inactive Sep 12 05:48:17.634396 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 12 05:48:17.634408 kernel: Initialized host personality Sep 12 05:48:17.634419 kernel: NET: Registered PF_VSOCK protocol family Sep 12 05:48:17.634431 systemd[1]: Populated /etc with preset unit settings. Sep 12 05:48:17.634444 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 05:48:17.634458 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 05:48:17.634471 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 05:48:17.634483 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 05:48:17.634495 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 05:48:17.634507 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 05:48:17.634519 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 05:48:17.634532 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 05:48:17.634547 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 05:48:17.634565 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 05:48:17.634578 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 05:48:17.634590 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 05:48:17.634602 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 05:48:17.634615 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 05:48:17.634628 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 05:48:17.634640 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 05:48:17.634652 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 05:48:17.634667 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 05:48:17.634679 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 05:48:17.634692 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 05:48:17.634707 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 05:48:17.634723 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 05:48:17.634735 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 05:48:17.634748 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 05:48:17.634760 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 05:48:17.634775 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 05:48:17.634793 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 05:48:17.634805 systemd[1]: Reached target slices.target - Slice Units. Sep 12 05:48:17.634817 systemd[1]: Reached target swap.target - Swaps. 
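"Initializing machine ID from VM UUID" means systemd seeded /etc/machine-id from the hypervisor-provided DMI product UUID rather than generating a random one. A sketch that reads the same source, assuming the sysfs attribute is readable (it normally requires root); the real derivation applies some extra normalization before the ID is written:

    # On KVM, systemd can seed /etc/machine-id from the DMI product UUID.
    # Sketch of reading that source; usually requires root, and systemd's
    # own code normalizes the value further before writing machine-id.
    from pathlib import Path

    DMI_UUID = Path("/sys/class/dmi/id/product_uuid")

    def vm_uuid():
        try:
            return DMI_UUID.read_text().strip().lower()
        except (FileNotFoundError, PermissionError):
            return None

    uuid = vm_uuid()
    print(uuid or "no DMI product UUID available")
    # A machine ID is 32 lowercase hex characters, i.e. the UUID sans dashes.
    print(uuid.replace("-", "") if uuid else "")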
Sep 12 05:48:17.634829 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 05:48:17.634841 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 05:48:17.634854 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 05:48:17.634866 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 05:48:17.634880 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 05:48:17.634893 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 05:48:17.634907 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 05:48:17.634919 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 05:48:17.634934 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 05:48:17.634947 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 05:48:17.634962 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 05:48:17.634982 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 05:48:17.634997 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 05:48:17.635009 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 05:48:17.635022 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 05:48:17.635036 systemd[1]: Reached target machines.target - Containers. Sep 12 05:48:17.635049 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 05:48:17.635061 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 05:48:17.635073 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 05:48:17.635086 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 05:48:17.635107 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 05:48:17.635119 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 05:48:17.635131 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 05:48:17.635148 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 05:48:17.635160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 05:48:17.635173 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 05:48:17.635185 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 05:48:17.635197 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 05:48:17.635210 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 05:48:17.635222 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 05:48:17.635254 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 05:48:17.635273 systemd[1]: Starting systemd-journald.service - Journal Service... 
Sep 12 05:48:17.635286 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 05:48:17.635298 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 05:48:17.635310 kernel: loop: module loaded Sep 12 05:48:17.635322 kernel: fuse: init (API version 7.41) Sep 12 05:48:17.635333 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 05:48:17.635345 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 05:48:17.635358 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 05:48:17.635372 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 05:48:17.635384 systemd[1]: Stopped verity-setup.service. Sep 12 05:48:17.635397 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 05:48:17.635409 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 05:48:17.635422 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 05:48:17.635435 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 05:48:17.635452 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 05:48:17.635464 kernel: ACPI: bus type drm_connector registered Sep 12 05:48:17.635476 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 05:48:17.635487 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 05:48:17.635499 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 05:48:17.635512 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 05:48:17.635529 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 05:48:17.635548 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 05:48:17.635562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 05:48:17.635574 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 05:48:17.635611 systemd-journald[1200]: Collecting audit messages is disabled. Sep 12 05:48:17.635634 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 05:48:17.635649 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 05:48:17.635667 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 05:48:17.635680 systemd-journald[1200]: Journal started Sep 12 05:48:17.635702 systemd-journald[1200]: Runtime Journal (/run/log/journal/7d93e6318e9142ca99465db870742d62) is 6M, max 48.4M, 42.4M free. Sep 12 05:48:17.352745 systemd[1]: Queued start job for default target multi-user.target. Sep 12 05:48:17.373338 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 05:48:17.373806 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 05:48:17.639255 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 05:48:17.640598 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 05:48:17.642153 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 05:48:17.642480 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 05:48:17.644117 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 12 05:48:17.644436 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 05:48:17.646283 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 05:48:17.648045 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 05:48:17.649966 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 05:48:17.651845 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 05:48:17.671169 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 05:48:17.674458 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 05:48:17.677265 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 05:48:17.678593 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 05:48:17.678633 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 05:48:17.681166 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 05:48:17.688401 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 05:48:17.690803 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 05:48:17.693390 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 05:48:17.697275 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 05:48:17.698671 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 05:48:17.702587 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 05:48:17.702778 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 05:48:17.705248 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 05:48:17.713425 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 05:48:17.717920 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 05:48:17.723602 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 05:48:17.725096 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 05:48:17.744496 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 05:48:17.754440 kernel: loop0: detected capacity change from 0 to 229808 Sep 12 05:48:17.754798 systemd-journald[1200]: Time spent on flushing to /var/log/journal/7d93e6318e9142ca99465db870742d62 is 16.335ms for 1077 entries. Sep 12 05:48:17.754798 systemd-journald[1200]: System Journal (/var/log/journal/7d93e6318e9142ca99465db870742d62) is 8M, max 195.6M, 187.6M free. Sep 12 05:48:17.871108 systemd-journald[1200]: Received client request to flush runtime journal. Sep 12 05:48:17.871167 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 05:48:17.754210 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 05:48:17.757729 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
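The journald lines above report runtime/system journal usage and flush timing in a fixed phrasing. If those numbers need to be extracted from captured logs, a small regex does it; the pattern below assumes exactly the wording seen here:

    # Pull the usage numbers out of journald status lines like the ones above
    # ("... is 8M, max 195.6M, 187.6M free."). The regex assumes this exact
    # phrasing and will need adjusting for other formats.
    import re

    LINE = ("System Journal (/var/log/journal/7d93e6318e9142ca99465db870742d62) "
            "is 8M, max 195.6M, 187.6M free.")

    pattern = re.compile(
        r"is (?P<used>[\d.]+)M, max (?P<max>[\d.]+)M, (?P<free>[\d.]+)M free")

    m = pattern.search(LINE)
    if m:
        used, limit, free = (float(m.group(k)) for k in ("used", "max", "free"))
        print(f"used {used} MiB of {limit} MiB ({free} MiB still available)")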
Sep 12 05:48:17.839911 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 05:48:17.873200 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 05:48:17.878332 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 05:48:17.886169 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 05:48:17.888183 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 05:48:17.891550 kernel: loop1: detected capacity change from 0 to 110984 Sep 12 05:48:17.892490 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 05:48:17.922658 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Sep 12 05:48:17.922677 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Sep 12 05:48:17.924317 kernel: loop2: detected capacity change from 0 to 128016 Sep 12 05:48:17.928147 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 05:48:17.967262 kernel: loop3: detected capacity change from 0 to 229808 Sep 12 05:48:17.994267 kernel: loop4: detected capacity change from 0 to 110984 Sep 12 05:48:18.010295 kernel: loop5: detected capacity change from 0 to 128016 Sep 12 05:48:18.026670 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 05:48:18.027390 (sd-merge)[1275]: Merged extensions into '/usr'. Sep 12 05:48:18.033862 systemd[1]: Reload requested from client PID 1251 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 05:48:18.033882 systemd[1]: Reloading... Sep 12 05:48:18.124281 zram_generator::config[1301]: No configuration found. Sep 12 05:48:18.223259 ldconfig[1246]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 05:48:18.326125 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 05:48:18.326455 systemd[1]: Reloading finished in 292 ms. Sep 12 05:48:18.362623 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 05:48:18.368601 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 05:48:18.384323 systemd[1]: Starting ensure-sysext.service... Sep 12 05:48:18.386763 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 05:48:18.404450 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)... Sep 12 05:48:18.404467 systemd[1]: Reloading... Sep 12 05:48:18.444656 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 05:48:18.445769 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 05:48:18.446355 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 05:48:18.447037 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 05:48:18.448575 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 05:48:18.449756 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Sep 12 05:48:18.449943 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. 
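The "(sd-merge) Using extensions ..." lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes images onto /usr; the kubernetes image is the one linked into /etc/extensions during the Ignition files stage earlier. A sketch that lists which extension images are currently visible, assuming the conventional /etc/extensions and /var/lib/extensions search paths:

    # List the sysext images systemd-sysext would consider, mirroring the
    # "(sd-merge) Using extensions ..." line above. /etc/extensions and
    # /var/lib/extensions are the usual search paths; adjust if your layout
    # differs.
    import os

    SEARCH_PATHS = ["/etc/extensions", "/var/lib/extensions"]

    for directory in SEARCH_PATHS:
        if not os.path.isdir(directory):
            continue
        for name in sorted(os.listdir(directory)):
            path = os.path.join(directory, name)
            target = os.readlink(path) if os.path.islink(path) else path
            print(f"{name} -> {target}")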
Sep 12 05:48:18.456944 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 05:48:18.456956 systemd-tmpfiles[1339]: Skipping /boot Sep 12 05:48:18.467706 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 05:48:18.467775 systemd-tmpfiles[1339]: Skipping /boot Sep 12 05:48:18.468314 zram_generator::config[1369]: No configuration found. Sep 12 05:48:18.641481 systemd[1]: Reloading finished in 236 ms. Sep 12 05:48:18.666169 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 05:48:18.685289 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 05:48:18.695406 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 05:48:18.698519 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 05:48:18.720118 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 05:48:18.724072 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 05:48:18.727543 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 05:48:18.733511 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 05:48:18.739625 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 05:48:18.739869 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 05:48:18.743072 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 05:48:18.747487 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 05:48:18.749884 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 05:48:18.751214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 05:48:18.751398 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 05:48:18.757421 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 05:48:18.758667 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 05:48:18.761385 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 05:48:18.763644 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 05:48:18.763873 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 05:48:18.765778 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 05:48:18.766028 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 05:48:18.767935 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 05:48:18.768279 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 05:48:18.782871 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Sep 12 05:48:18.787760 augenrules[1439]: No rules Sep 12 05:48:18.787796 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 05:48:18.788038 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 05:48:18.790525 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 05:48:18.793496 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 05:48:18.796817 systemd-udevd[1410]: Using default interface naming scheme 'v255'. Sep 12 05:48:18.796911 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 05:48:18.804179 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 05:48:18.805592 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 05:48:18.805700 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 05:48:18.807516 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 05:48:18.808700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 05:48:18.811995 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 05:48:18.812405 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 05:48:18.814875 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 05:48:18.816919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 05:48:18.817157 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 05:48:18.819092 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 05:48:18.819317 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 05:48:18.820902 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 05:48:18.822641 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 05:48:18.823434 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 05:48:18.825112 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 05:48:18.827759 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 05:48:18.829273 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 05:48:18.838678 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 05:48:18.855874 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 05:48:18.857326 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 05:48:18.857637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 05:48:18.857771 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 12 05:48:18.859467 systemd[1]: Finished ensure-sysext.service. Sep 12 05:48:18.868403 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 05:48:18.944915 systemd-resolved[1408]: Positive Trust Anchors: Sep 12 05:48:18.944930 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 05:48:18.944960 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 05:48:18.951511 systemd-resolved[1408]: Defaulting to hostname 'linux'. Sep 12 05:48:18.952946 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 05:48:18.954259 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 05:48:18.971139 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 05:48:19.003213 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 05:48:19.010927 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 05:48:19.016494 systemd-networkd[1487]: lo: Link UP Sep 12 05:48:19.016504 systemd-networkd[1487]: lo: Gained carrier Sep 12 05:48:19.018189 systemd-networkd[1487]: Enumeration completed Sep 12 05:48:19.018312 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 05:48:19.019580 systemd[1]: Reached target network.target - Network. Sep 12 05:48:19.020914 systemd-networkd[1487]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 05:48:19.020927 systemd-networkd[1487]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 05:48:19.021592 systemd-networkd[1487]: eth0: Link UP Sep 12 05:48:19.021777 systemd-networkd[1487]: eth0: Gained carrier Sep 12 05:48:19.021800 systemd-networkd[1487]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 05:48:19.023459 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 05:48:19.026874 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 05:48:19.031685 systemd-networkd[1487]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 05:48:19.036323 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 05:48:19.044431 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 05:48:19.053058 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 05:48:19.649163 systemd-resolved[1408]: Clock change detected. Flushing caches. Sep 12 05:48:19.649348 systemd-timesyncd[1489]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 05:48:19.649401 systemd-timesyncd[1489]: Initial clock synchronization to Fri 2025-09-12 05:48:19.649110 UTC. 
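systemd-timesyncd made a single NTP exchange with 10.0.0.1:123 and stepped the clock, which is why systemd-resolved flushes its caches immediately afterwards. A minimal SNTP query in the same spirit, reusing the server address from the log and skipping the delay/offset filtering a real client performs:

    # Minimal SNTP query in the spirit of the systemd-timesyncd exchange
    # logged above: one client request, then print the server's transmit
    # timestamp. No filtering or clock adjustment is attempted.
    import socket
    import struct
    from datetime import datetime, timezone

    SERVER = ("10.0.0.1", 123)     # time server seen in the log
    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

    packet = bytearray(48)
    packet[0] = (0 << 6) | (4 << 3) | 3  # LI=0, VN=4, Mode=3 (client)

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(2.0)
        sock.sendto(packet, SERVER)
        data, _ = sock.recvfrom(48)

    # Transmit timestamp: 32-bit seconds + 32-bit fraction, big-endian, at offset 40.
    secs, frac = struct.unpack("!II", data[40:48])
    unix_time = secs - NTP_EPOCH_OFFSET + frac / 2**32
    print(datetime.fromtimestamp(unix_time, tz=timezone.utc).isoformat())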
Sep 12 05:48:19.653453 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 05:48:19.654993 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 05:48:19.656280 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 05:48:19.657554 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 12 05:48:19.658707 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 05:48:19.660262 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 05:48:19.660297 systemd[1]: Reached target paths.target - Path Units. Sep 12 05:48:19.662177 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 05:48:19.663427 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 05:48:19.665035 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 12 05:48:19.668198 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 05:48:19.669029 kernel: ACPI: button: Power Button [PWRF] Sep 12 05:48:19.669901 systemd[1]: Reached target timers.target - Timer Units. Sep 12 05:48:19.671780 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 05:48:19.674688 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 05:48:19.677281 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 12 05:48:19.690394 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 12 05:48:19.690601 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 05:48:19.683303 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 05:48:19.684725 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 05:48:19.686033 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 05:48:19.699018 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 05:48:19.708169 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 05:48:19.710300 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 05:48:19.711842 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 05:48:19.714389 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 05:48:19.715570 systemd[1]: Reached target basic.target - Basic System. Sep 12 05:48:19.716606 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 05:48:19.716639 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 05:48:19.717841 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 05:48:19.722247 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 05:48:19.726253 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 05:48:19.738600 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 05:48:19.740973 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 12 05:48:19.742116 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 05:48:19.744249 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 12 05:48:19.746312 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 05:48:19.751975 jq[1528]: false Sep 12 05:48:19.752109 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 05:48:19.754846 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 05:48:19.758609 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing passwd entry cache Sep 12 05:48:19.758631 oslogin_cache_refresh[1530]: Refreshing passwd entry cache Sep 12 05:48:19.759990 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 05:48:19.766200 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 05:48:19.767974 extend-filesystems[1529]: Found /dev/vda6 Sep 12 05:48:19.768162 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 05:48:19.774268 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 05:48:19.777419 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting users, quitting Sep 12 05:48:19.777419 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 05:48:19.777419 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing group entry cache Sep 12 05:48:19.774940 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 05:48:19.774714 oslogin_cache_refresh[1530]: Failure getting users, quitting Sep 12 05:48:19.777444 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 05:48:19.774737 oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 05:48:19.774806 oslogin_cache_refresh[1530]: Refreshing group entry cache Sep 12 05:48:19.781764 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 05:48:19.783458 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 05:48:19.783711 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 05:48:19.784798 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 05:48:19.785177 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 05:48:19.791291 extend-filesystems[1529]: Found /dev/vda9 Sep 12 05:48:19.793469 oslogin_cache_refresh[1530]: Failure getting groups, quitting Sep 12 05:48:19.793633 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting groups, quitting Sep 12 05:48:19.793633 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 05:48:19.793483 oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Sep 12 05:48:19.797047 extend-filesystems[1529]: Checking size of /dev/vda9 Sep 12 05:48:19.801297 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 12 05:48:19.801561 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 12 05:48:19.806796 jq[1543]: true Sep 12 05:48:19.813332 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 05:48:19.826565 extend-filesystems[1529]: Resized partition /dev/vda9 Sep 12 05:48:19.815317 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 05:48:19.829983 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 05:48:19.830144 extend-filesystems[1571]: resize2fs 1.47.3 (8-Jul-2025) Sep 12 05:48:19.815488 (ntainerd)[1558]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 05:48:19.832542 jq[1568]: true Sep 12 05:48:19.835293 update_engine[1542]: I20250912 05:48:19.833891 1542 main.cc:92] Flatcar Update Engine starting Sep 12 05:48:19.834149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 05:48:19.846450 dbus-daemon[1526]: [system] SELinux support is enabled Sep 12 05:48:19.846631 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 05:48:19.853037 update_engine[1542]: I20250912 05:48:19.852859 1542 update_check_scheduler.cc:74] Next update check in 7m19s Sep 12 05:48:19.856141 tar[1545]: linux-amd64/LICENSE Sep 12 05:48:19.858253 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 05:48:19.858480 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 05:48:19.860274 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 05:48:19.865246 tar[1545]: linux-amd64/helm Sep 12 05:48:19.860487 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 05:48:19.862289 systemd[1]: Started update-engine.service - Update Engine. Sep 12 05:48:19.867510 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 05:48:19.927228 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 05:48:19.927560 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:48:19.935705 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 05:48:19.941039 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 05:48:19.969292 extend-filesystems[1571]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 05:48:19.969292 extend-filesystems[1571]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 05:48:19.969292 extend-filesystems[1571]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 05:48:19.972970 extend-filesystems[1529]: Resized filesystem in /dev/vda9 Sep 12 05:48:19.971733 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 05:48:19.974195 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
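As a quick cross-check of the resize2fs figures above (a back-of-the-envelope conversion only, using the 4k block size the log itself reports), the old and new block counts of /dev/vda9 work out to roughly 2.1 GiB and 7.1 GiB:

    package main

    import "fmt"

    func main() {
        const blockSize = 4096 // "1864699 (4k) blocks" per the resize2fs output above
        for _, blocks := range []int64{553472, 1864699} {
            bytes := blocks * blockSize
            fmt.Printf("%8d blocks * 4096 = %11d bytes (~%.2f GiB)\n",
                blocks, bytes, float64(bytes)/(1<<30))
        }
    }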
Sep 12 05:48:19.995121 systemd-logind[1540]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 05:48:19.995152 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 05:48:19.995426 systemd-logind[1540]: New seat seat0. Sep 12 05:48:20.014069 bash[1593]: Updated "/home/core/.ssh/authorized_keys" Sep 12 05:48:20.022582 kernel: kvm_amd: TSC scaling supported Sep 12 05:48:20.022624 kernel: kvm_amd: Nested Virtualization enabled Sep 12 05:48:20.022640 kernel: kvm_amd: Nested Paging enabled Sep 12 05:48:20.023105 kernel: kvm_amd: LBR virtualization supported Sep 12 05:48:20.024290 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 12 05:48:20.024313 kernel: kvm_amd: Virtual GIF supported Sep 12 05:48:20.054273 locksmithd[1583]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 05:48:20.073518 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 05:48:20.074023 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 05:48:20.078838 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:48:20.091908 sshd_keygen[1561]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 05:48:20.118666 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 05:48:20.128928 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 05:48:20.132284 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 05:48:20.144112 kernel: EDAC MC: Ver: 3.0.0 Sep 12 05:48:20.155396 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 05:48:20.155815 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 05:48:20.159473 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 05:48:20.188279 containerd[1558]: time="2025-09-12T05:48:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 05:48:20.191016 containerd[1558]: time="2025-09-12T05:48:20.189258790Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 05:48:20.277568 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 05:48:20.283075 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 05:48:20.286566 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 05:48:20.289254 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 12 05:48:20.300954 containerd[1558]: time="2025-09-12T05:48:20.300884438Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.611µs" Sep 12 05:48:20.300954 containerd[1558]: time="2025-09-12T05:48:20.300938359Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 05:48:20.301063 containerd[1558]: time="2025-09-12T05:48:20.300965700Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 05:48:20.301292 containerd[1558]: time="2025-09-12T05:48:20.301258850Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 05:48:20.301332 containerd[1558]: time="2025-09-12T05:48:20.301291812Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 05:48:20.301358 containerd[1558]: time="2025-09-12T05:48:20.301335093Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 05:48:20.301451 containerd[1558]: time="2025-09-12T05:48:20.301415995Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 05:48:20.301451 containerd[1558]: time="2025-09-12T05:48:20.301437244Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 05:48:20.301891 containerd[1558]: time="2025-09-12T05:48:20.301837164Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 05:48:20.301891 containerd[1558]: time="2025-09-12T05:48:20.301866409Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 05:48:20.301891 containerd[1558]: time="2025-09-12T05:48:20.301889372Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 05:48:20.302439 containerd[1558]: time="2025-09-12T05:48:20.301901185Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 05:48:20.302439 containerd[1558]: time="2025-09-12T05:48:20.302041598Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 05:48:20.302439 containerd[1558]: time="2025-09-12T05:48:20.302323887Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 05:48:20.302439 containerd[1558]: time="2025-09-12T05:48:20.302358051Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 05:48:20.302439 containerd[1558]: time="2025-09-12T05:48:20.302370294Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 05:48:20.302439 containerd[1558]: time="2025-09-12T05:48:20.302414187Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 05:48:20.302696 containerd[1558]: 
time="2025-09-12T05:48:20.302660739Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 05:48:20.302765 containerd[1558]: time="2025-09-12T05:48:20.302744987Z" level=info msg="metadata content store policy set" policy=shared Sep 12 05:48:20.309846 containerd[1558]: time="2025-09-12T05:48:20.309742279Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 05:48:20.309846 containerd[1558]: time="2025-09-12T05:48:20.309807211Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 05:48:20.309846 containerd[1558]: time="2025-09-12T05:48:20.309824403Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 05:48:20.310046 containerd[1558]: time="2025-09-12T05:48:20.309914062Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 05:48:20.310046 containerd[1558]: time="2025-09-12T05:48:20.309934670Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 05:48:20.310046 containerd[1558]: time="2025-09-12T05:48:20.309948867Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 05:48:20.310046 containerd[1558]: time="2025-09-12T05:48:20.309964927Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 05:48:20.310046 containerd[1558]: time="2025-09-12T05:48:20.309978542Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 05:48:20.310046 containerd[1558]: time="2025-09-12T05:48:20.309991206Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 05:48:20.310046 containerd[1558]: time="2025-09-12T05:48:20.310024459Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 05:48:20.310046 containerd[1558]: time="2025-09-12T05:48:20.310037343Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 05:48:20.310199 containerd[1558]: time="2025-09-12T05:48:20.310052902Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 05:48:20.310221 containerd[1558]: time="2025-09-12T05:48:20.310202212Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 05:48:20.310242 containerd[1558]: time="2025-09-12T05:48:20.310223482Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 05:48:20.310262 containerd[1558]: time="2025-09-12T05:48:20.310241305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 05:48:20.310287 containerd[1558]: time="2025-09-12T05:48:20.310262956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 05:48:20.310287 containerd[1558]: time="2025-09-12T05:48:20.310278926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 05:48:20.310325 containerd[1558]: time="2025-09-12T05:48:20.310291880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 05:48:20.310325 containerd[1558]: 
time="2025-09-12T05:48:20.310305716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 05:48:20.310325 containerd[1558]: time="2025-09-12T05:48:20.310318831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 05:48:20.310390 containerd[1558]: time="2025-09-12T05:48:20.310332777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 05:48:20.310390 containerd[1558]: time="2025-09-12T05:48:20.310345741Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 05:48:20.310390 containerd[1558]: time="2025-09-12T05:48:20.310358385Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 05:48:20.310454 containerd[1558]: time="2025-09-12T05:48:20.310433626Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 05:48:20.310476 containerd[1558]: time="2025-09-12T05:48:20.310454154Z" level=info msg="Start snapshots syncer" Sep 12 05:48:20.310522 containerd[1558]: time="2025-09-12T05:48:20.310502665Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 05:48:20.310960 containerd[1558]: time="2025-09-12T05:48:20.310893929Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 05:48:20.311082 containerd[1558]: time="2025-09-12T05:48:20.310985140Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 05:48:20.311105 containerd[1558]: time="2025-09-12T05:48:20.311088063Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 
Sep 12 05:48:20.311395 containerd[1558]: time="2025-09-12T05:48:20.311361466Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 05:48:20.311431 containerd[1558]: time="2025-09-12T05:48:20.311395510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 05:48:20.311431 containerd[1558]: time="2025-09-12T05:48:20.311410378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 05:48:20.311431 containerd[1558]: time="2025-09-12T05:48:20.311424013Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 05:48:20.311586 containerd[1558]: time="2025-09-12T05:48:20.311547976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 05:48:20.311617 containerd[1558]: time="2025-09-12T05:48:20.311583853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 05:48:20.311617 containerd[1558]: time="2025-09-12T05:48:20.311609581Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 05:48:20.311658 containerd[1558]: time="2025-09-12T05:48:20.311642513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 05:48:20.311678 containerd[1558]: time="2025-09-12T05:48:20.311663242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 05:48:20.311707 containerd[1558]: time="2025-09-12T05:48:20.311683119Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 05:48:20.311752 containerd[1558]: time="2025-09-12T05:48:20.311728464Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 05:48:20.311840 containerd[1558]: time="2025-09-12T05:48:20.311796943Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 05:48:20.311840 containerd[1558]: time="2025-09-12T05:48:20.311830916Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 05:48:20.311887 containerd[1558]: time="2025-09-12T05:48:20.311848840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 05:48:20.311887 containerd[1558]: time="2025-09-12T05:48:20.311861794Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 05:48:20.311936 containerd[1558]: time="2025-09-12T05:48:20.311884447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 05:48:20.311936 containerd[1558]: time="2025-09-12T05:48:20.311905516Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 05:48:20.311974 containerd[1558]: time="2025-09-12T05:48:20.311928670Z" level=info msg="runtime interface created" Sep 12 05:48:20.311974 containerd[1558]: time="2025-09-12T05:48:20.311944048Z" level=info msg="created NRI interface" Sep 12 05:48:20.311974 containerd[1558]: time="2025-09-12T05:48:20.311955179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
Sep 12 05:48:20.311974 containerd[1558]: time="2025-09-12T05:48:20.311972842Z" level=info msg="Connect containerd service" Sep 12 05:48:20.312132 containerd[1558]: time="2025-09-12T05:48:20.312112494Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 05:48:20.323210 containerd[1558]: time="2025-09-12T05:48:20.323122682Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 05:48:20.543908 containerd[1558]: time="2025-09-12T05:48:20.543822523Z" level=info msg="Start subscribing containerd event" Sep 12 05:48:20.543908 containerd[1558]: time="2025-09-12T05:48:20.543917261Z" level=info msg="Start recovering state" Sep 12 05:48:20.544112 containerd[1558]: time="2025-09-12T05:48:20.544049058Z" level=info msg="Start event monitor" Sep 12 05:48:20.544112 containerd[1558]: time="2025-09-12T05:48:20.544068595Z" level=info msg="Start cni network conf syncer for default" Sep 12 05:48:20.544112 containerd[1558]: time="2025-09-12T05:48:20.544076480Z" level=info msg="Start streaming server" Sep 12 05:48:20.544112 containerd[1558]: time="2025-09-12T05:48:20.544090386Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 05:48:20.544112 containerd[1558]: time="2025-09-12T05:48:20.544097489Z" level=info msg="runtime interface starting up..." Sep 12 05:48:20.544230 containerd[1558]: time="2025-09-12T05:48:20.544139568Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 05:48:20.544290 containerd[1558]: time="2025-09-12T05:48:20.544166508Z" level=info msg="starting plugins..." Sep 12 05:48:20.544346 containerd[1558]: time="2025-09-12T05:48:20.544324114Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 05:48:20.544534 containerd[1558]: time="2025-09-12T05:48:20.544215771Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 05:48:20.545931 containerd[1558]: time="2025-09-12T05:48:20.544598108Z" level=info msg="containerd successfully booted in 0.358512s" Sep 12 05:48:20.544718 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 05:48:20.551559 tar[1545]: linux-amd64/README.md Sep 12 05:48:20.579515 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 05:48:21.576249 systemd-networkd[1487]: eth0: Gained IPv6LL Sep 12 05:48:21.579731 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 05:48:21.581988 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 05:48:21.585372 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 05:48:21.587966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:48:21.610624 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 05:48:21.637260 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 05:48:21.640818 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 05:48:21.641162 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 05:48:21.642900 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 05:48:23.203766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
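The containerd lines above show the daemon serving on /run/containerd/containerd.sock and registering the "k8s.io" namespace, and the PullImage entries further down are the result of requests arriving over that socket. As a minimal illustration only (not the code path kubelet actually uses, which speaks CRI over the same socket), the long-standing github.com/containerd/containerd Go client can drive such a pull directly; the image reference below is one that appears later in this log:

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the socket containerd reports serving on above.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The log shows the "k8s.io" namespace being registered; the image
        // pulls recorded later in this log land in that namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack one of the images the log pulls later (pause:3.10).
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
    }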
Sep 12 05:48:23.205534 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 05:48:23.206939 systemd[1]: Startup finished in 3.275s (kernel) + 7.064s (initrd) + 5.967s (userspace) = 16.307s. Sep 12 05:48:23.211366 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 05:48:23.506112 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 05:48:23.507926 systemd[1]: Started sshd@0-10.0.0.20:22-10.0.0.1:60366.service - OpenSSH per-connection server daemon (10.0.0.1:60366). Sep 12 05:48:23.603905 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 60366 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:48:23.606232 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:48:23.613950 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 05:48:23.615282 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 05:48:23.623741 systemd-logind[1540]: New session 1 of user core. Sep 12 05:48:23.637118 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 05:48:23.641549 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 05:48:23.660793 (systemd)[1691]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 05:48:23.663599 systemd-logind[1540]: New session c1 of user core. Sep 12 05:48:23.845847 systemd[1691]: Queued start job for default target default.target. Sep 12 05:48:23.862604 systemd[1691]: Created slice app.slice - User Application Slice. Sep 12 05:48:23.862636 systemd[1691]: Reached target paths.target - Paths. Sep 12 05:48:23.862685 systemd[1691]: Reached target timers.target - Timers. Sep 12 05:48:23.864532 systemd[1691]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 05:48:23.893582 kubelet[1675]: E0912 05:48:23.893511 1675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 05:48:23.895830 systemd[1691]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 05:48:23.895967 systemd[1691]: Reached target sockets.target - Sockets. Sep 12 05:48:23.896033 systemd[1691]: Reached target basic.target - Basic System. Sep 12 05:48:23.896078 systemd[1691]: Reached target default.target - Main User Target. Sep 12 05:48:23.896116 systemd[1691]: Startup finished in 212ms. Sep 12 05:48:23.896724 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 05:48:23.910136 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 05:48:23.910470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 05:48:23.910655 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 05:48:23.910973 systemd[1]: kubelet.service: Consumed 2.089s CPU time, 267.9M memory peak. Sep 12 05:48:23.978299 systemd[1]: Started sshd@1-10.0.0.20:22-10.0.0.1:60380.service - OpenSSH per-connection server daemon (10.0.0.1:60380). 
Sep 12 05:48:24.045561 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 60380 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:48:24.047759 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:48:24.054575 systemd-logind[1540]: New session 2 of user core. Sep 12 05:48:24.065243 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 05:48:24.124266 sshd[1708]: Connection closed by 10.0.0.1 port 60380 Sep 12 05:48:24.124567 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Sep 12 05:48:24.141605 systemd[1]: sshd@1-10.0.0.20:22-10.0.0.1:60380.service: Deactivated successfully. Sep 12 05:48:24.143633 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 05:48:24.144421 systemd-logind[1540]: Session 2 logged out. Waiting for processes to exit. Sep 12 05:48:24.147671 systemd[1]: Started sshd@2-10.0.0.20:22-10.0.0.1:60388.service - OpenSSH per-connection server daemon (10.0.0.1:60388). Sep 12 05:48:24.148398 systemd-logind[1540]: Removed session 2. Sep 12 05:48:24.507165 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 60388 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:48:24.508418 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:48:24.512903 systemd-logind[1540]: New session 3 of user core. Sep 12 05:48:24.519143 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 05:48:24.568167 sshd[1718]: Connection closed by 10.0.0.1 port 60388 Sep 12 05:48:24.568499 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Sep 12 05:48:24.576938 systemd[1]: sshd@2-10.0.0.20:22-10.0.0.1:60388.service: Deactivated successfully. Sep 12 05:48:24.578745 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 05:48:24.579516 systemd-logind[1540]: Session 3 logged out. Waiting for processes to exit. Sep 12 05:48:24.582400 systemd[1]: Started sshd@3-10.0.0.20:22-10.0.0.1:60390.service - OpenSSH per-connection server daemon (10.0.0.1:60390). Sep 12 05:48:24.583026 systemd-logind[1540]: Removed session 3. Sep 12 05:48:24.648708 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 60390 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:48:24.650315 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:48:24.654855 systemd-logind[1540]: New session 4 of user core. Sep 12 05:48:24.664125 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 05:48:24.717359 sshd[1727]: Connection closed by 10.0.0.1 port 60390 Sep 12 05:48:24.717763 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Sep 12 05:48:24.736897 systemd[1]: sshd@3-10.0.0.20:22-10.0.0.1:60390.service: Deactivated successfully. Sep 12 05:48:24.738789 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 05:48:24.739535 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit. Sep 12 05:48:24.742446 systemd[1]: Started sshd@4-10.0.0.20:22-10.0.0.1:60398.service - OpenSSH per-connection server daemon (10.0.0.1:60398). Sep 12 05:48:24.743270 systemd-logind[1540]: Removed session 4. 
Sep 12 05:48:24.802975 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 60398 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:48:24.804923 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:48:24.811104 systemd-logind[1540]: New session 5 of user core. Sep 12 05:48:24.824319 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 05:48:24.885572 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 05:48:24.885916 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:48:24.905985 sudo[1737]: pam_unix(sudo:session): session closed for user root Sep 12 05:48:24.907849 sshd[1736]: Connection closed by 10.0.0.1 port 60398 Sep 12 05:48:24.908482 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Sep 12 05:48:24.926811 systemd[1]: sshd@4-10.0.0.20:22-10.0.0.1:60398.service: Deactivated successfully. Sep 12 05:48:24.929209 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 05:48:24.930121 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit. Sep 12 05:48:24.933300 systemd[1]: Started sshd@5-10.0.0.20:22-10.0.0.1:60400.service - OpenSSH per-connection server daemon (10.0.0.1:60400). Sep 12 05:48:24.933933 systemd-logind[1540]: Removed session 5. Sep 12 05:48:25.000640 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 60400 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:48:25.002310 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:48:25.007067 systemd-logind[1540]: New session 6 of user core. Sep 12 05:48:25.021123 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 05:48:25.076162 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 05:48:25.076468 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:48:25.195490 sudo[1748]: pam_unix(sudo:session): session closed for user root Sep 12 05:48:25.202535 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 05:48:25.202859 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:48:25.214072 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 05:48:25.264829 augenrules[1770]: No rules Sep 12 05:48:25.266583 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 05:48:25.266889 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 05:48:25.268276 sudo[1747]: pam_unix(sudo:session): session closed for user root Sep 12 05:48:25.269946 sshd[1746]: Connection closed by 10.0.0.1 port 60400 Sep 12 05:48:25.270466 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Sep 12 05:48:25.282921 systemd[1]: sshd@5-10.0.0.20:22-10.0.0.1:60400.service: Deactivated successfully. Sep 12 05:48:25.285511 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 05:48:25.286451 systemd-logind[1540]: Session 6 logged out. Waiting for processes to exit. Sep 12 05:48:25.289248 systemd[1]: Started sshd@6-10.0.0.20:22-10.0.0.1:60410.service - OpenSSH per-connection server daemon (10.0.0.1:60410). Sep 12 05:48:25.289775 systemd-logind[1540]: Removed session 6. 
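Each "Accepted publickey for core ... RSA SHA256:U1JO..." line above identifies the client key by its SHA256 fingerprint, the same format ssh-keygen -lf prints. A small sketch with golang.org/x/crypto/ssh shows how an authorized_keys line and such a fingerprint are derived from a public key; it generates a throwaway ed25519 key purely for illustration, since the actual key behind these sessions is not contained in the log:

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Throwaway key, only so there is something to fingerprint.
        pub, _, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        sshPub, err := ssh.NewPublicKey(pub)
        if err != nil {
            log.Fatal(err)
        }
        // The authorized_keys form, as stored under ~/.ssh/authorized_keys.
        fmt.Print(string(ssh.MarshalAuthorizedKey(sshPub)))
        // The "SHA256:..." form sshd logs when it accepts the key.
        fmt.Println(ssh.FingerprintSHA256(sshPub))
    }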
Sep 12 05:48:25.347625 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 60410 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:48:25.349174 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:48:25.353778 systemd-logind[1540]: New session 7 of user core. Sep 12 05:48:25.362174 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 05:48:25.415808 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 05:48:25.416270 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:48:26.043962 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 05:48:26.061441 (dockerd)[1803]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 05:48:26.558771 dockerd[1803]: time="2025-09-12T05:48:26.558678596Z" level=info msg="Starting up" Sep 12 05:48:26.559639 dockerd[1803]: time="2025-09-12T05:48:26.559607037Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 05:48:26.589861 dockerd[1803]: time="2025-09-12T05:48:26.589775503Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 05:48:26.975069 dockerd[1803]: time="2025-09-12T05:48:26.974765647Z" level=info msg="Loading containers: start." Sep 12 05:48:26.992040 kernel: Initializing XFRM netlink socket Sep 12 05:48:27.292823 systemd-networkd[1487]: docker0: Link UP Sep 12 05:48:27.299644 dockerd[1803]: time="2025-09-12T05:48:27.299583356Z" level=info msg="Loading containers: done." Sep 12 05:48:27.318013 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1518389111-merged.mount: Deactivated successfully. Sep 12 05:48:27.320807 dockerd[1803]: time="2025-09-12T05:48:27.320763156Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 05:48:27.320876 dockerd[1803]: time="2025-09-12T05:48:27.320846783Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 05:48:27.320973 dockerd[1803]: time="2025-09-12T05:48:27.320951459Z" level=info msg="Initializing buildkit" Sep 12 05:48:27.352650 dockerd[1803]: time="2025-09-12T05:48:27.352597136Z" level=info msg="Completed buildkit initialization" Sep 12 05:48:27.360496 dockerd[1803]: time="2025-09-12T05:48:27.360381495Z" level=info msg="Daemon has completed initialization" Sep 12 05:48:27.360658 dockerd[1803]: time="2025-09-12T05:48:27.360494306Z" level=info msg="API listen on /run/docker.sock" Sep 12 05:48:27.360922 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 05:48:28.659853 containerd[1558]: time="2025-09-12T05:48:28.659799076Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 05:48:29.847575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1725774090.mount: Deactivated successfully. 
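Once dockerd reports "API listen on /run/docker.sock" above, the daemon can be queried with plain HTTP over that Unix socket; the version it would report (28.0.4) is the one logged during startup. A standard-library-only sketch of such a query (in practice the docker CLI or the official Go SDK would normally be used instead):

    package main

    import (
        "context"
        "fmt"
        "io"
        "log"
        "net"
        "net/http"
    )

    func main() {
        // HTTP client whose transport dials the Unix socket dockerd listens on.
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    var d net.Dialer
                    return d.DialContext(ctx, "unix", "/run/docker.sock")
                },
            },
        }
        // The host in the URL is ignored; only the path matters for the socket.
        resp, err := client.Get("http://docker/version")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s\n%s\n", resp.Status, body)
    }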
Sep 12 05:48:33.365734 containerd[1558]: time="2025-09-12T05:48:33.365639527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:33.367094 containerd[1558]: time="2025-09-12T05:48:33.366972407Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 12 05:48:33.368661 containerd[1558]: time="2025-09-12T05:48:33.368596152Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:33.375040 containerd[1558]: time="2025-09-12T05:48:33.373632096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:33.375917 containerd[1558]: time="2025-09-12T05:48:33.375871375Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 4.716021654s" Sep 12 05:48:33.376143 containerd[1558]: time="2025-09-12T05:48:33.376089665Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 12 05:48:33.391464 containerd[1558]: time="2025-09-12T05:48:33.391398584Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 05:48:34.161401 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 05:48:34.163609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:48:34.478684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:48:34.483075 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 05:48:34.561680 kubelet[2087]: E0912 05:48:34.561594 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 05:48:34.569090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 05:48:34.569293 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 05:48:34.569712 systemd[1]: kubelet.service: Consumed 350ms CPU time, 110.3M memory peak. 
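The kubelet failures here and earlier all trace back to the same missing file, /var/lib/kubelet/config.yaml, which on a node like this is normally generated during cluster bootstrap (for example by kubeadm) rather than written by hand. Purely to illustrate what that file is, the sketch below writes a minimal KubeletConfiguration; apiVersion and kind are the required fields, and cgroupDriver: systemd matches the driver the kubelet later reports receiving from the CRI runtime:

    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    // Minimal stand-in for the file the kubelet keeps failing to find above.
    // A real node would carry many more fields, typically written by kubeadm.
    const minimalKubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    `

    func main() {
        path := "/var/lib/kubelet/config.yaml"
        if _, err := os.Stat(path); err == nil {
            log.Printf("%s already exists, leaving it alone", path)
            return
        }
        if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile(path, []byte(minimalKubeletConfig), 0o644); err != nil {
            log.Fatal(err)
        }
        log.Printf("wrote minimal kubelet config to %s", path)
    }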
Sep 12 05:48:35.959306 containerd[1558]: time="2025-09-12T05:48:35.959205078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:35.962873 containerd[1558]: time="2025-09-12T05:48:35.962818545Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 12 05:48:35.975203 containerd[1558]: time="2025-09-12T05:48:35.975152525Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:35.977924 containerd[1558]: time="2025-09-12T05:48:35.977894638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:35.978909 containerd[1558]: time="2025-09-12T05:48:35.978856131Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.587419155s" Sep 12 05:48:35.978909 containerd[1558]: time="2025-09-12T05:48:35.978896587Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 12 05:48:35.979655 containerd[1558]: time="2025-09-12T05:48:35.979626916Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 05:48:37.569222 containerd[1558]: time="2025-09-12T05:48:37.569130223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:37.569873 containerd[1558]: time="2025-09-12T05:48:37.569807853Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 12 05:48:37.571058 containerd[1558]: time="2025-09-12T05:48:37.571024816Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:37.573464 containerd[1558]: time="2025-09-12T05:48:37.573412614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:37.574565 containerd[1558]: time="2025-09-12T05:48:37.574489874Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.594833723s" Sep 12 05:48:37.574565 containerd[1558]: time="2025-09-12T05:48:37.574540559Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 12 05:48:37.575166 
containerd[1558]: time="2025-09-12T05:48:37.575132930Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 05:48:38.569605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount278495861.mount: Deactivated successfully. Sep 12 05:48:39.019768 containerd[1558]: time="2025-09-12T05:48:39.019620229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:39.020461 containerd[1558]: time="2025-09-12T05:48:39.020398028Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 12 05:48:39.021615 containerd[1558]: time="2025-09-12T05:48:39.021580084Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:39.023395 containerd[1558]: time="2025-09-12T05:48:39.023355193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:39.023862 containerd[1558]: time="2025-09-12T05:48:39.023814254Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.448649855s" Sep 12 05:48:39.023896 containerd[1558]: time="2025-09-12T05:48:39.023859659Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 12 05:48:39.024576 containerd[1558]: time="2025-09-12T05:48:39.024387369Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 05:48:39.619562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1999012200.mount: Deactivated successfully. 
Sep 12 05:48:40.886923 containerd[1558]: time="2025-09-12T05:48:40.886849080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:40.887746 containerd[1558]: time="2025-09-12T05:48:40.887703963Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 12 05:48:40.889287 containerd[1558]: time="2025-09-12T05:48:40.889210849Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:40.894147 containerd[1558]: time="2025-09-12T05:48:40.894086362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:40.895459 containerd[1558]: time="2025-09-12T05:48:40.895386280Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.870961952s" Sep 12 05:48:40.895459 containerd[1558]: time="2025-09-12T05:48:40.895447495Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 12 05:48:40.896678 containerd[1558]: time="2025-09-12T05:48:40.896562786Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 05:48:41.722451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355567163.mount: Deactivated successfully. 
Sep 12 05:48:41.730876 containerd[1558]: time="2025-09-12T05:48:41.730828112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 05:48:41.731636 containerd[1558]: time="2025-09-12T05:48:41.731590451Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 05:48:41.732825 containerd[1558]: time="2025-09-12T05:48:41.732789490Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 05:48:41.734833 containerd[1558]: time="2025-09-12T05:48:41.734802385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 05:48:41.735406 containerd[1558]: time="2025-09-12T05:48:41.735367394Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 838.772878ms" Sep 12 05:48:41.735448 containerd[1558]: time="2025-09-12T05:48:41.735407359Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 05:48:41.736041 containerd[1558]: time="2025-09-12T05:48:41.735927395Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 05:48:42.457682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount85568132.mount: Deactivated successfully. 
Sep 12 05:48:44.337208 containerd[1558]: time="2025-09-12T05:48:44.337137652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:44.337976 containerd[1558]: time="2025-09-12T05:48:44.337918266Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 12 05:48:44.339266 containerd[1558]: time="2025-09-12T05:48:44.339214908Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:44.342094 containerd[1558]: time="2025-09-12T05:48:44.342053421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:44.343284 containerd[1558]: time="2025-09-12T05:48:44.343249995Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.607287013s" Sep 12 05:48:44.343328 containerd[1558]: time="2025-09-12T05:48:44.343282286Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 12 05:48:44.773894 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 05:48:44.775853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:48:44.982457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:48:44.986909 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 05:48:45.021467 kubelet[2254]: E0912 05:48:45.021397 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 05:48:45.025810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 05:48:45.026018 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 05:48:45.026458 systemd[1]: kubelet.service: Consumed 206ms CPU time, 109.1M memory peak. Sep 12 05:48:47.715443 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:48:47.715609 systemd[1]: kubelet.service: Consumed 206ms CPU time, 109.1M memory peak. Sep 12 05:48:47.717767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:48:47.743190 systemd[1]: Reload requested from client PID 2273 ('systemctl') (unit session-7.scope)... Sep 12 05:48:47.743205 systemd[1]: Reloading... Sep 12 05:48:47.861041 zram_generator::config[2315]: No configuration found. Sep 12 05:48:48.140835 systemd[1]: Reloading finished in 397 ms. Sep 12 05:48:48.226760 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 05:48:48.226866 systemd[1]: kubelet.service: Failed with result 'signal'. 
Sep 12 05:48:48.227227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:48:48.227282 systemd[1]: kubelet.service: Consumed 175ms CPU time, 98.2M memory peak. Sep 12 05:48:48.228993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:48:48.406038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:48:48.420314 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 05:48:48.468549 kubelet[2363]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 05:48:48.468549 kubelet[2363]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 05:48:48.468549 kubelet[2363]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 05:48:48.469030 kubelet[2363]: I0912 05:48:48.468711 2363 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 05:48:48.548349 kubelet[2363]: I0912 05:48:48.548307 2363 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 05:48:48.548349 kubelet[2363]: I0912 05:48:48.548332 2363 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 05:48:48.548689 kubelet[2363]: I0912 05:48:48.548669 2363 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 05:48:48.616313 kubelet[2363]: E0912 05:48:48.616269 2363 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 05:48:48.616569 kubelet[2363]: I0912 05:48:48.616524 2363 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 05:48:48.628044 kubelet[2363]: I0912 05:48:48.627532 2363 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 05:48:48.633557 kubelet[2363]: I0912 05:48:48.633514 2363 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 05:48:48.633946 kubelet[2363]: I0912 05:48:48.633910 2363 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 05:48:48.634222 kubelet[2363]: I0912 05:48:48.633944 2363 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 05:48:48.634414 kubelet[2363]: I0912 05:48:48.634234 2363 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 05:48:48.634414 kubelet[2363]: I0912 05:48:48.634249 2363 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 05:48:48.634470 kubelet[2363]: I0912 05:48:48.634459 2363 state_mem.go:36] "Initialized new in-memory state store" Sep 12 05:48:48.638317 kubelet[2363]: I0912 05:48:48.638282 2363 kubelet.go:480] "Attempting to sync node with API server" Sep 12 05:48:48.638317 kubelet[2363]: I0912 05:48:48.638315 2363 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 05:48:48.638380 kubelet[2363]: I0912 05:48:48.638355 2363 kubelet.go:386] "Adding apiserver pod source" Sep 12 05:48:48.638405 kubelet[2363]: I0912 05:48:48.638392 2363 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 05:48:48.645559 kubelet[2363]: E0912 05:48:48.645421 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 05:48:48.645799 kubelet[2363]: I0912 05:48:48.645771 2363 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 05:48:48.646463 kubelet[2363]: I0912 05:48:48.646421 2363 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 
05:48:48.646743 kubelet[2363]: E0912 05:48:48.646705 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 05:48:48.647111 kubelet[2363]: W0912 05:48:48.647078 2363 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 05:48:48.650334 kubelet[2363]: I0912 05:48:48.650307 2363 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 05:48:48.650385 kubelet[2363]: I0912 05:48:48.650364 2363 server.go:1289] "Started kubelet" Sep 12 05:48:48.652235 kubelet[2363]: I0912 05:48:48.652206 2363 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 05:48:48.652379 kubelet[2363]: I0912 05:48:48.652351 2363 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 05:48:48.653579 kubelet[2363]: I0912 05:48:48.653560 2363 server.go:317] "Adding debug handlers to kubelet server" Sep 12 05:48:48.654456 kubelet[2363]: I0912 05:48:48.652220 2363 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 05:48:48.657947 kubelet[2363]: E0912 05:48:48.657711 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:48:48.657947 kubelet[2363]: I0912 05:48:48.657751 2363 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 05:48:48.660693 kubelet[2363]: I0912 05:48:48.659943 2363 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 05:48:48.660693 kubelet[2363]: I0912 05:48:48.660138 2363 reconciler.go:26] "Reconciler: start to sync state" Sep 12 05:48:48.661281 kubelet[2363]: E0912 05:48:48.661244 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 05:48:48.661499 kubelet[2363]: E0912 05:48:48.661478 2363 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 05:48:48.661693 kubelet[2363]: I0912 05:48:48.661673 2363 factory.go:223] Registration of the systemd container factory successfully Sep 12 05:48:48.661909 kubelet[2363]: I0912 05:48:48.661881 2363 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 05:48:48.663531 kubelet[2363]: E0912 05:48:48.663273 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="200ms" Sep 12 05:48:48.663531 kubelet[2363]: E0912 05:48:48.657680 2363 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186472efc17dc114 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 05:48:48.650330388 +0000 UTC m=+0.222716403,LastTimestamp:2025-09-12 05:48:48.650330388 +0000 UTC m=+0.222716403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 05:48:48.664673 kubelet[2363]: I0912 05:48:48.664592 2363 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 05:48:48.664938 kubelet[2363]: I0912 05:48:48.664919 2363 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 05:48:48.666225 kubelet[2363]: I0912 05:48:48.666195 2363 factory.go:223] Registration of the containerd container factory successfully Sep 12 05:48:48.758505 kubelet[2363]: E0912 05:48:48.758410 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:48:48.819130 kubelet[2363]: I0912 05:48:48.819069 2363 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 05:48:48.819130 kubelet[2363]: I0912 05:48:48.819093 2363 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 05:48:48.819130 kubelet[2363]: I0912 05:48:48.819111 2363 state_mem.go:36] "Initialized new in-memory state store" Sep 12 05:48:48.819307 kubelet[2363]: I0912 05:48:48.819209 2363 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 05:48:48.820934 kubelet[2363]: I0912 05:48:48.820911 2363 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 05:48:48.820989 kubelet[2363]: I0912 05:48:48.820947 2363 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 05:48:48.820989 kubelet[2363]: I0912 05:48:48.820988 2363 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
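Every "connection refused" against https://10.0.0.20:6443 above is the kubelet racing the static kube-apiserver pod it is about to create; note the lease retry interval doubling (200ms here, then 400ms, 800ms and 1.6s further down) as the controller backs off. A hedged way to watch for the control plane coming up once the sandboxes below are running (the endpoint and port are taken from the log; the kubelet healthz port 10248 is the upstream default and an assumption for this host):

    # API server health, served once the static pod is up (-k: the bootstrap CA is self-signed)
    curl -ks https://10.0.0.20:6443/healthz
    # kubelet's own local healthz endpoint
    curl -s http://127.0.0.1:10248/healthz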
Sep 12 05:48:48.821080 kubelet[2363]: I0912 05:48:48.821022 2363 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 05:48:48.821281 kubelet[2363]: E0912 05:48:48.821074 2363 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 05:48:48.821776 kubelet[2363]: E0912 05:48:48.821744 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 05:48:48.858762 kubelet[2363]: E0912 05:48:48.858694 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:48:48.864625 kubelet[2363]: E0912 05:48:48.864595 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="400ms" Sep 12 05:48:48.921758 kubelet[2363]: E0912 05:48:48.921662 2363 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 05:48:48.959173 kubelet[2363]: E0912 05:48:48.959141 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:48:49.059883 kubelet[2363]: E0912 05:48:49.059806 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:48:49.122671 kubelet[2363]: E0912 05:48:49.122633 2363 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 05:48:49.160352 kubelet[2363]: E0912 05:48:49.160296 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:48:49.260956 kubelet[2363]: E0912 05:48:49.260735 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:48:49.265562 kubelet[2363]: E0912 05:48:49.265524 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="800ms" Sep 12 05:48:49.360927 kubelet[2363]: E0912 05:48:49.360809 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:48:49.414578 kubelet[2363]: I0912 05:48:49.414495 2363 policy_none.go:49] "None policy: Start" Sep 12 05:48:49.414578 kubelet[2363]: I0912 05:48:49.414529 2363 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 05:48:49.414578 kubelet[2363]: I0912 05:48:49.414548 2363 state_mem.go:35] "Initializing new in-memory state store" Sep 12 05:48:49.423420 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 05:48:49.438466 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 05:48:49.441829 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
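With the "none" CPU and memory policies selected, the kubelet builds its QoS cgroup hierarchy through systemd, which is what the kubepods, kubepods-burstable and kubepods-besteffort slices above are. A small sketch for inspecting them on this cgroup-v2 host (paths assume the standard /sys/fs/cgroup mount):

    systemctl status kubepods.slice kubepods-burstable.slice kubepods-besteffort.slice
    # cgroup v2: the slices appear directly under the unified hierarchy
    ls /sys/fs/cgroup/kubepods.slice/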
Sep 12 05:48:49.460985 kubelet[2363]: E0912 05:48:49.460943 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:48:49.461237 kubelet[2363]: E0912 05:48:49.461208 2363 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 05:48:49.461501 kubelet[2363]: I0912 05:48:49.461479 2363 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 05:48:49.461552 kubelet[2363]: I0912 05:48:49.461508 2363 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 05:48:49.461831 kubelet[2363]: I0912 05:48:49.461800 2363 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 05:48:49.463443 kubelet[2363]: E0912 05:48:49.463422 2363 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 05:48:49.463526 kubelet[2363]: E0912 05:48:49.463468 2363 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 05:48:49.538013 systemd[1]: Created slice kubepods-burstable-podc8782b8fa9ee2e2ad5c5f36c25d3839f.slice - libcontainer container kubepods-burstable-podc8782b8fa9ee2e2ad5c5f36c25d3839f.slice. Sep 12 05:48:49.562811 kubelet[2363]: I0912 05:48:49.562745 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 05:48:49.563288 kubelet[2363]: I0912 05:48:49.563268 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8782b8fa9ee2e2ad5c5f36c25d3839f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8782b8fa9ee2e2ad5c5f36c25d3839f\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:49.563341 kubelet[2363]: E0912 05:48:49.563282 2363 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Sep 12 05:48:49.563341 kubelet[2363]: I0912 05:48:49.563304 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:49.563341 kubelet[2363]: I0912 05:48:49.563325 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8782b8fa9ee2e2ad5c5f36c25d3839f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8782b8fa9ee2e2ad5c5f36c25d3839f\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:49.563341 kubelet[2363]: I0912 05:48:49.563342 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8782b8fa9ee2e2ad5c5f36c25d3839f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c8782b8fa9ee2e2ad5c5f36c25d3839f\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:49.563457 kubelet[2363]: I0912 05:48:49.563356 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:49.563457 kubelet[2363]: I0912 05:48:49.563371 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:49.563457 kubelet[2363]: I0912 05:48:49.563387 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:49.563457 kubelet[2363]: I0912 05:48:49.563402 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:49.563457 kubelet[2363]: I0912 05:48:49.563417 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 05:48:49.565270 kubelet[2363]: E0912 05:48:49.565227 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:49.568335 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 12 05:48:49.582350 kubelet[2363]: E0912 05:48:49.582298 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:49.585220 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
Sep 12 05:48:49.587336 kubelet[2363]: E0912 05:48:49.587297 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:49.596765 kubelet[2363]: E0912 05:48:49.596704 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 05:48:49.765718 kubelet[2363]: I0912 05:48:49.765660 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 05:48:49.766094 kubelet[2363]: E0912 05:48:49.766063 2363 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Sep 12 05:48:49.866493 kubelet[2363]: E0912 05:48:49.866318 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:49.867262 containerd[1558]: time="2025-09-12T05:48:49.867212967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c8782b8fa9ee2e2ad5c5f36c25d3839f,Namespace:kube-system,Attempt:0,}" Sep 12 05:48:49.883597 kubelet[2363]: E0912 05:48:49.883542 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:49.884110 containerd[1558]: time="2025-09-12T05:48:49.884071634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 12 05:48:49.888726 kubelet[2363]: E0912 05:48:49.888665 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:49.889237 containerd[1558]: time="2025-09-12T05:48:49.889196014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 12 05:48:49.898189 kubelet[2363]: E0912 05:48:49.898117 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 05:48:49.898472 containerd[1558]: time="2025-09-12T05:48:49.898438697Z" level=info msg="connecting to shim e0189a89cc5daed9bbd3f86e9fcf713246c7f70829fcbda44b68804d1c33a063" address="unix:///run/containerd/s/d164fe2695bb68ce8fc038e69e153fa7ce6494e62b7fcaa9c2619a9dc3a35876" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:48:49.932208 containerd[1558]: time="2025-09-12T05:48:49.932143515Z" level=info msg="connecting to shim 3177b37175691f2d4e33ac5004211686ef283b0214506b878cd360151be00dd3" address="unix:///run/containerd/s/1b9109565c387c93f27c602740e8725438d1b3e15fbaddbba06c3e64766c8917" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:48:49.946073 containerd[1558]: time="2025-09-12T05:48:49.943215509Z" level=info 
msg="connecting to shim 80489fe9f591b174fbef1c3dc5250e7df1e3622a8c9f380d032534e75d66fe54" address="unix:///run/containerd/s/76f8fb31bbeec3509fc7b564a0a37d99f439a9434b5c65d5053c7de5f615c5b1" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:48:49.969274 systemd[1]: Started cri-containerd-e0189a89cc5daed9bbd3f86e9fcf713246c7f70829fcbda44b68804d1c33a063.scope - libcontainer container e0189a89cc5daed9bbd3f86e9fcf713246c7f70829fcbda44b68804d1c33a063. Sep 12 05:48:49.997183 systemd[1]: Started cri-containerd-3177b37175691f2d4e33ac5004211686ef283b0214506b878cd360151be00dd3.scope - libcontainer container 3177b37175691f2d4e33ac5004211686ef283b0214506b878cd360151be00dd3. Sep 12 05:48:50.002946 systemd[1]: Started cri-containerd-80489fe9f591b174fbef1c3dc5250e7df1e3622a8c9f380d032534e75d66fe54.scope - libcontainer container 80489fe9f591b174fbef1c3dc5250e7df1e3622a8c9f380d032534e75d66fe54. Sep 12 05:48:50.196451 kubelet[2363]: E0912 05:48:50.196241 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="1.6s" Sep 12 05:48:50.198556 kubelet[2363]: I0912 05:48:50.198519 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 05:48:50.199106 kubelet[2363]: E0912 05:48:50.199081 2363 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Sep 12 05:48:50.201337 containerd[1558]: time="2025-09-12T05:48:50.201296022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c8782b8fa9ee2e2ad5c5f36c25d3839f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0189a89cc5daed9bbd3f86e9fcf713246c7f70829fcbda44b68804d1c33a063\"" Sep 12 05:48:50.202498 kubelet[2363]: E0912 05:48:50.202470 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:50.207207 containerd[1558]: time="2025-09-12T05:48:50.207166381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3177b37175691f2d4e33ac5004211686ef283b0214506b878cd360151be00dd3\"" Sep 12 05:48:50.207707 containerd[1558]: time="2025-09-12T05:48:50.207666970Z" level=info msg="CreateContainer within sandbox \"e0189a89cc5daed9bbd3f86e9fcf713246c7f70829fcbda44b68804d1c33a063\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 05:48:50.207888 kubelet[2363]: E0912 05:48:50.207864 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:50.209743 containerd[1558]: time="2025-09-12T05:48:50.209714751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"80489fe9f591b174fbef1c3dc5250e7df1e3622a8c9f380d032534e75d66fe54\"" Sep 12 05:48:50.210276 kubelet[2363]: E0912 05:48:50.210249 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 12 05:48:50.212119 containerd[1558]: time="2025-09-12T05:48:50.212077542Z" level=info msg="CreateContainer within sandbox \"3177b37175691f2d4e33ac5004211686ef283b0214506b878cd360151be00dd3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 05:48:50.214634 kubelet[2363]: E0912 05:48:50.214589 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 05:48:50.221391 containerd[1558]: time="2025-09-12T05:48:50.221354650Z" level=info msg="Container 9c22b17ef6960e2efe5d897565d856204e68e2be0a0e17441d8ffee754f7f7da: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:48:50.225754 containerd[1558]: time="2025-09-12T05:48:50.225721639Z" level=info msg="CreateContainer within sandbox \"80489fe9f591b174fbef1c3dc5250e7df1e3622a8c9f380d032534e75d66fe54\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 05:48:50.230563 containerd[1558]: time="2025-09-12T05:48:50.230505982Z" level=info msg="Container 4c5da7bac9ebbe71ff9f9de23bc7818e13dc21d9524b2202dfca0778ca66b4c9: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:48:50.238647 kubelet[2363]: E0912 05:48:50.238590 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 05:48:50.241588 containerd[1558]: time="2025-09-12T05:48:50.241551606Z" level=info msg="CreateContainer within sandbox \"3177b37175691f2d4e33ac5004211686ef283b0214506b878cd360151be00dd3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9c22b17ef6960e2efe5d897565d856204e68e2be0a0e17441d8ffee754f7f7da\"" Sep 12 05:48:50.242256 containerd[1558]: time="2025-09-12T05:48:50.242199982Z" level=info msg="CreateContainer within sandbox \"e0189a89cc5daed9bbd3f86e9fcf713246c7f70829fcbda44b68804d1c33a063\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4c5da7bac9ebbe71ff9f9de23bc7818e13dc21d9524b2202dfca0778ca66b4c9\"" Sep 12 05:48:50.242352 containerd[1558]: time="2025-09-12T05:48:50.242318995Z" level=info msg="StartContainer for \"9c22b17ef6960e2efe5d897565d856204e68e2be0a0e17441d8ffee754f7f7da\"" Sep 12 05:48:50.242666 containerd[1558]: time="2025-09-12T05:48:50.242632924Z" level=info msg="StartContainer for \"4c5da7bac9ebbe71ff9f9de23bc7818e13dc21d9524b2202dfca0778ca66b4c9\"" Sep 12 05:48:50.243907 containerd[1558]: time="2025-09-12T05:48:50.243879892Z" level=info msg="connecting to shim 4c5da7bac9ebbe71ff9f9de23bc7818e13dc21d9524b2202dfca0778ca66b4c9" address="unix:///run/containerd/s/d164fe2695bb68ce8fc038e69e153fa7ce6494e62b7fcaa9c2619a9dc3a35876" protocol=ttrpc version=3 Sep 12 05:48:50.243965 containerd[1558]: time="2025-09-12T05:48:50.243899329Z" level=info msg="connecting to shim 9c22b17ef6960e2efe5d897565d856204e68e2be0a0e17441d8ffee754f7f7da" address="unix:///run/containerd/s/1b9109565c387c93f27c602740e8725438d1b3e15fbaddbba06c3e64766c8917" protocol=ttrpc version=3 Sep 12 05:48:50.247247 containerd[1558]: time="2025-09-12T05:48:50.247212482Z" level=info msg="Container 
1d675d54805b7fc97c5d532d646220d0d83d4cf2a2815a474f15acbb60522cfa: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:48:50.259600 containerd[1558]: time="2025-09-12T05:48:50.259483044Z" level=info msg="CreateContainer within sandbox \"80489fe9f591b174fbef1c3dc5250e7df1e3622a8c9f380d032534e75d66fe54\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1d675d54805b7fc97c5d532d646220d0d83d4cf2a2815a474f15acbb60522cfa\"" Sep 12 05:48:50.260515 containerd[1558]: time="2025-09-12T05:48:50.260478150Z" level=info msg="StartContainer for \"1d675d54805b7fc97c5d532d646220d0d83d4cf2a2815a474f15acbb60522cfa\"" Sep 12 05:48:50.262211 containerd[1558]: time="2025-09-12T05:48:50.261983904Z" level=info msg="connecting to shim 1d675d54805b7fc97c5d532d646220d0d83d4cf2a2815a474f15acbb60522cfa" address="unix:///run/containerd/s/76f8fb31bbeec3509fc7b564a0a37d99f439a9434b5c65d5053c7de5f615c5b1" protocol=ttrpc version=3 Sep 12 05:48:50.267195 systemd[1]: Started cri-containerd-9c22b17ef6960e2efe5d897565d856204e68e2be0a0e17441d8ffee754f7f7da.scope - libcontainer container 9c22b17ef6960e2efe5d897565d856204e68e2be0a0e17441d8ffee754f7f7da. Sep 12 05:48:50.320618 systemd[1]: Started cri-containerd-4c5da7bac9ebbe71ff9f9de23bc7818e13dc21d9524b2202dfca0778ca66b4c9.scope - libcontainer container 4c5da7bac9ebbe71ff9f9de23bc7818e13dc21d9524b2202dfca0778ca66b4c9. Sep 12 05:48:50.339173 systemd[1]: Started cri-containerd-1d675d54805b7fc97c5d532d646220d0d83d4cf2a2815a474f15acbb60522cfa.scope - libcontainer container 1d675d54805b7fc97c5d532d646220d0d83d4cf2a2815a474f15acbb60522cfa. Sep 12 05:48:50.416160 containerd[1558]: time="2025-09-12T05:48:50.416104585Z" level=info msg="StartContainer for \"1d675d54805b7fc97c5d532d646220d0d83d4cf2a2815a474f15acbb60522cfa\" returns successfully" Sep 12 05:48:50.416384 containerd[1558]: time="2025-09-12T05:48:50.416338885Z" level=info msg="StartContainer for \"4c5da7bac9ebbe71ff9f9de23bc7818e13dc21d9524b2202dfca0778ca66b4c9\" returns successfully" Sep 12 05:48:50.476465 containerd[1558]: time="2025-09-12T05:48:50.476343284Z" level=info msg="StartContainer for \"9c22b17ef6960e2efe5d897565d856204e68e2be0a0e17441d8ffee754f7f7da\" returns successfully" Sep 12 05:48:50.835809 kubelet[2363]: E0912 05:48:50.835751 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:50.836303 kubelet[2363]: E0912 05:48:50.835916 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:50.840557 kubelet[2363]: E0912 05:48:50.840509 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:50.840649 kubelet[2363]: E0912 05:48:50.840617 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:50.842718 kubelet[2363]: E0912 05:48:50.842691 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:50.842875 kubelet[2363]: E0912 05:48:50.842838 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:51.001222 kubelet[2363]: I0912 05:48:51.001183 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 05:48:51.800148 kubelet[2363]: E0912 05:48:51.800098 2363 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 05:48:51.845419 kubelet[2363]: I0912 05:48:51.845362 2363 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 05:48:51.845419 kubelet[2363]: E0912 05:48:51.845412 2363 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 05:48:51.847383 kubelet[2363]: E0912 05:48:51.847346 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:51.847499 kubelet[2363]: E0912 05:48:51.847476 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:51.848895 kubelet[2363]: E0912 05:48:51.848861 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:51.850032 kubelet[2363]: E0912 05:48:51.848978 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:51.858597 kubelet[2363]: E0912 05:48:51.858516 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:48:51.959635 kubelet[2363]: E0912 05:48:51.959517 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:48:52.060537 kubelet[2363]: E0912 05:48:52.060419 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:48:52.164075 kubelet[2363]: I0912 05:48:52.164040 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:52.170287 kubelet[2363]: E0912 05:48:52.170235 2363 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:52.170287 kubelet[2363]: I0912 05:48:52.170280 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:52.172101 kubelet[2363]: E0912 05:48:52.172078 2363 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:52.172101 kubelet[2363]: I0912 05:48:52.172097 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 05:48:52.174064 kubelet[2363]: E0912 05:48:52.174045 2363 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 05:48:52.641738 kubelet[2363]: I0912 05:48:52.641670 2363 apiserver.go:52] "Watching 
apiserver" Sep 12 05:48:52.660753 kubelet[2363]: I0912 05:48:52.660680 2363 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 05:48:52.860033 kubelet[2363]: I0912 05:48:52.859471 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:52.865361 kubelet[2363]: E0912 05:48:52.865312 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:52.892206 kubelet[2363]: I0912 05:48:52.891056 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 05:48:52.897311 kubelet[2363]: E0912 05:48:52.897270 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:53.864025 kubelet[2363]: E0912 05:48:53.863271 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:53.864025 kubelet[2363]: E0912 05:48:53.863912 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:53.867798 systemd[1]: Reload requested from client PID 2653 ('systemctl') (unit session-7.scope)... Sep 12 05:48:53.867819 systemd[1]: Reloading... Sep 12 05:48:54.004074 zram_generator::config[2699]: No configuration found. Sep 12 05:48:54.207958 systemd[1]: Reloading finished in 339 ms. Sep 12 05:48:54.283240 kubelet[2363]: I0912 05:48:54.283192 2363 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 05:48:54.283783 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:48:54.302565 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 05:48:54.302972 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:48:54.303155 systemd[1]: kubelet.service: Consumed 888ms CPU time, 130.9M memory peak. Sep 12 05:48:54.307268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:48:54.524498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:48:54.538414 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 05:48:54.594152 kubelet[2741]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 05:48:54.594569 kubelet[2741]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 05:48:54.594569 kubelet[2741]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 05:48:54.594569 kubelet[2741]: I0912 05:48:54.594387 2741 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 05:48:54.601364 kubelet[2741]: I0912 05:48:54.601308 2741 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 05:48:54.601364 kubelet[2741]: I0912 05:48:54.601347 2741 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 05:48:54.601713 kubelet[2741]: I0912 05:48:54.601632 2741 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 05:48:54.603163 kubelet[2741]: I0912 05:48:54.603090 2741 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 05:48:54.605789 kubelet[2741]: I0912 05:48:54.605753 2741 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 05:48:54.610806 kubelet[2741]: I0912 05:48:54.610771 2741 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 05:48:54.617556 kubelet[2741]: I0912 05:48:54.617489 2741 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 05:48:54.617772 kubelet[2741]: I0912 05:48:54.617733 2741 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 05:48:54.617933 kubelet[2741]: I0912 05:48:54.617765 2741 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 05:48:54.618075 kubelet[2741]: I0912 05:48:54.617966 2741 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 05:48:54.618075 kubelet[2741]: I0912 05:48:54.617977 2741 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 05:48:54.618075 kubelet[2741]: I0912 05:48:54.618046 2741 state_mem.go:36] "Initialized new in-memory state store" Sep 12 05:48:54.618265 kubelet[2741]: I0912 
05:48:54.618246 2741 kubelet.go:480] "Attempting to sync node with API server" Sep 12 05:48:54.618265 kubelet[2741]: I0912 05:48:54.618267 2741 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 05:48:54.618339 kubelet[2741]: I0912 05:48:54.618296 2741 kubelet.go:386] "Adding apiserver pod source" Sep 12 05:48:54.618339 kubelet[2741]: I0912 05:48:54.618314 2741 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 05:48:54.619605 kubelet[2741]: I0912 05:48:54.619582 2741 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 05:48:54.621032 kubelet[2741]: I0912 05:48:54.620315 2741 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 05:48:54.626442 kubelet[2741]: I0912 05:48:54.626419 2741 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 05:48:54.626497 kubelet[2741]: I0912 05:48:54.626479 2741 server.go:1289] "Started kubelet" Sep 12 05:48:54.626748 kubelet[2741]: I0912 05:48:54.626701 2741 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 05:48:54.629461 kubelet[2741]: I0912 05:48:54.628328 2741 server.go:317] "Adding debug handlers to kubelet server" Sep 12 05:48:54.629461 kubelet[2741]: I0912 05:48:54.629107 2741 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 05:48:54.629461 kubelet[2741]: I0912 05:48:54.629408 2741 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 05:48:54.631511 kubelet[2741]: I0912 05:48:54.630418 2741 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 05:48:54.632601 kubelet[2741]: E0912 05:48:54.632561 2741 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 05:48:54.632986 kubelet[2741]: I0912 05:48:54.632969 2741 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 05:48:54.635644 kubelet[2741]: I0912 05:48:54.635147 2741 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 05:48:54.635644 kubelet[2741]: I0912 05:48:54.635262 2741 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 05:48:54.635644 kubelet[2741]: I0912 05:48:54.635382 2741 reconciler.go:26] "Reconciler: start to sync state" Sep 12 05:48:54.636062 kubelet[2741]: I0912 05:48:54.636037 2741 factory.go:223] Registration of the systemd container factory successfully Sep 12 05:48:54.636185 kubelet[2741]: I0912 05:48:54.636160 2741 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 05:48:54.639067 kubelet[2741]: I0912 05:48:54.638982 2741 factory.go:223] Registration of the containerd container factory successfully Sep 12 05:48:54.651170 kubelet[2741]: I0912 05:48:54.651072 2741 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 05:48:54.652526 kubelet[2741]: I0912 05:48:54.652492 2741 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 12 05:48:54.652526 kubelet[2741]: I0912 05:48:54.652527 2741 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 05:48:54.652635 kubelet[2741]: I0912 05:48:54.652553 2741 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 05:48:54.652635 kubelet[2741]: I0912 05:48:54.652565 2741 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 05:48:54.652703 kubelet[2741]: E0912 05:48:54.652618 2741 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 05:48:54.682317 kubelet[2741]: I0912 05:48:54.682281 2741 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 05:48:54.682317 kubelet[2741]: I0912 05:48:54.682300 2741 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 05:48:54.682317 kubelet[2741]: I0912 05:48:54.682321 2741 state_mem.go:36] "Initialized new in-memory state store" Sep 12 05:48:54.682533 kubelet[2741]: I0912 05:48:54.682456 2741 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 05:48:54.682533 kubelet[2741]: I0912 05:48:54.682469 2741 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 05:48:54.682533 kubelet[2741]: I0912 05:48:54.682485 2741 policy_none.go:49] "None policy: Start" Sep 12 05:48:54.682533 kubelet[2741]: I0912 05:48:54.682495 2741 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 05:48:54.682533 kubelet[2741]: I0912 05:48:54.682506 2741 state_mem.go:35] "Initializing new in-memory state store" Sep 12 05:48:54.682675 kubelet[2741]: I0912 05:48:54.682588 2741 state_mem.go:75] "Updated machine memory state" Sep 12 05:48:54.688150 kubelet[2741]: E0912 05:48:54.688021 2741 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 05:48:54.688407 kubelet[2741]: I0912 05:48:54.688364 2741 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 05:48:54.688407 kubelet[2741]: I0912 05:48:54.688386 2741 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 05:48:54.688693 kubelet[2741]: I0912 05:48:54.688671 2741 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 05:48:54.691447 kubelet[2741]: E0912 05:48:54.691406 2741 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 05:48:54.754117 kubelet[2741]: I0912 05:48:54.754049 2741 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:54.754117 kubelet[2741]: I0912 05:48:54.754116 2741 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 05:48:54.754287 kubelet[2741]: I0912 05:48:54.754053 2741 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:54.760640 kubelet[2741]: E0912 05:48:54.760590 2741 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 05:48:54.760845 kubelet[2741]: E0912 05:48:54.760757 2741 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:54.797183 kubelet[2741]: I0912 05:48:54.797052 2741 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 05:48:54.804208 kubelet[2741]: I0912 05:48:54.804163 2741 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 05:48:54.804389 kubelet[2741]: I0912 05:48:54.804253 2741 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 05:48:54.863417 sudo[2783]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 05:48:54.863811 sudo[2783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 05:48:54.936828 kubelet[2741]: I0912 05:48:54.936623 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8782b8fa9ee2e2ad5c5f36c25d3839f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8782b8fa9ee2e2ad5c5f36c25d3839f\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:54.936828 kubelet[2741]: I0912 05:48:54.936666 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8782b8fa9ee2e2ad5c5f36c25d3839f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8782b8fa9ee2e2ad5c5f36c25d3839f\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:54.936828 kubelet[2741]: I0912 05:48:54.936695 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:54.936828 kubelet[2741]: I0912 05:48:54.936712 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:54.936828 kubelet[2741]: I0912 05:48:54.936730 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " 
pod="kube-system/kube-scheduler-localhost" Sep 12 05:48:54.937134 kubelet[2741]: I0912 05:48:54.936746 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8782b8fa9ee2e2ad5c5f36c25d3839f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c8782b8fa9ee2e2ad5c5f36c25d3839f\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:54.937134 kubelet[2741]: I0912 05:48:54.936781 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:54.937134 kubelet[2741]: I0912 05:48:54.936856 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:54.937134 kubelet[2741]: I0912 05:48:54.936900 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:55.060568 kubelet[2741]: E0912 05:48:55.060448 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:55.061599 kubelet[2741]: E0912 05:48:55.061499 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:55.061790 kubelet[2741]: E0912 05:48:55.061771 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:55.176495 sudo[2783]: pam_unix(sudo:session): session closed for user root Sep 12 05:48:55.619691 kubelet[2741]: I0912 05:48:55.619629 2741 apiserver.go:52] "Watching apiserver" Sep 12 05:48:55.636225 kubelet[2741]: I0912 05:48:55.636196 2741 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 05:48:55.670375 kubelet[2741]: E0912 05:48:55.670345 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:55.670519 kubelet[2741]: E0912 05:48:55.670462 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:55.670751 kubelet[2741]: I0912 05:48:55.670728 2741 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:55.753695 kubelet[2741]: E0912 05:48:55.753452 2741 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:55.753695 kubelet[2741]: E0912 05:48:55.753652 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:55.783242 kubelet[2741]: I0912 05:48:55.783157 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7831298549999999 podStartE2EDuration="1.783129855s" podCreationTimestamp="2025-09-12 05:48:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:48:55.774944136 +0000 UTC m=+1.231398034" watchObservedRunningTime="2025-09-12 05:48:55.783129855 +0000 UTC m=+1.239583753" Sep 12 05:48:55.791057 kubelet[2741]: I0912 05:48:55.790761 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.79074217 podStartE2EDuration="3.79074217s" podCreationTimestamp="2025-09-12 05:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:48:55.783287387 +0000 UTC m=+1.239741275" watchObservedRunningTime="2025-09-12 05:48:55.79074217 +0000 UTC m=+1.247196078" Sep 12 05:48:55.797968 kubelet[2741]: I0912 05:48:55.797891 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.797873784 podStartE2EDuration="3.797873784s" podCreationTimestamp="2025-09-12 05:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:48:55.790699347 +0000 UTC m=+1.247153245" watchObservedRunningTime="2025-09-12 05:48:55.797873784 +0000 UTC m=+1.254327683" Sep 12 05:48:56.493519 sudo[1783]: pam_unix(sudo:session): session closed for user root Sep 12 05:48:56.494978 sshd[1782]: Connection closed by 10.0.0.1 port 60410 Sep 12 05:48:56.495626 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Sep 12 05:48:56.499329 systemd[1]: sshd@6-10.0.0.20:22-10.0.0.1:60410.service: Deactivated successfully. Sep 12 05:48:56.501800 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 05:48:56.502104 systemd[1]: session-7.scope: Consumed 5.745s CPU time, 262M memory peak. Sep 12 05:48:56.503418 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit. Sep 12 05:48:56.504794 systemd-logind[1540]: Removed session 7. 
Sep 12 05:48:56.671790 kubelet[2741]: E0912 05:48:56.671744 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:56.672390 kubelet[2741]: E0912 05:48:56.671752 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:57.673657 kubelet[2741]: E0912 05:48:57.673601 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:59.011556 kubelet[2741]: I0912 05:48:59.011504 2741 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 05:48:59.012100 containerd[1558]: time="2025-09-12T05:48:59.011912884Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 05:48:59.012391 kubelet[2741]: I0912 05:48:59.012126 2741 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 05:49:00.338287 systemd[1]: Created slice kubepods-burstable-pod782d0b1e_f95d_4104_bd6f_40c00ecd3c54.slice - libcontainer container kubepods-burstable-pod782d0b1e_f95d_4104_bd6f_40c00ecd3c54.slice. Sep 12 05:49:00.351191 systemd[1]: Created slice kubepods-besteffort-podda55031c_7fbd_45c8_a2c7_650062a3c736.slice - libcontainer container kubepods-besteffort-podda55031c_7fbd_45c8_a2c7_650062a3c736.slice. Sep 12 05:49:00.369622 kubelet[2741]: I0912 05:49:00.369572 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-clustermesh-secrets\") pod \"cilium-fhkkt\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " pod="kube-system/cilium-fhkkt" Sep 12 05:49:00.369622 kubelet[2741]: I0912 05:49:00.369620 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cilium-run\") pod \"cilium-fhkkt\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " pod="kube-system/cilium-fhkkt" Sep 12 05:49:00.370305 kubelet[2741]: I0912 05:49:00.369650 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-etc-cni-netd\") pod \"cilium-fhkkt\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " pod="kube-system/cilium-fhkkt" Sep 12 05:49:00.370305 kubelet[2741]: I0912 05:49:00.369676 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cilium-config-path\") pod \"cilium-fhkkt\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " pod="kube-system/cilium-fhkkt" Sep 12 05:49:00.370305 kubelet[2741]: I0912 05:49:00.369698 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cni-path\") pod \"cilium-fhkkt\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " pod="kube-system/cilium-fhkkt" Sep 12 05:49:00.370305 kubelet[2741]: I0912 
05:49:00.369718 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-bpf-maps\") pod \"cilium-fhkkt\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " pod="kube-system/cilium-fhkkt" Sep 12 05:49:00.370305 kubelet[2741]: I0912 05:49:00.369751 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-hostproc\") pod \"cilium-fhkkt\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " pod="kube-system/cilium-fhkkt" Sep 12 05:49:00.370305 kubelet[2741]: I0912 05:49:00.369773 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cilium-cgroup\") pod \"cilium-fhkkt\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " pod="kube-system/cilium-fhkkt" Sep 12 05:49:00.370502 kubelet[2741]: I0912 05:49:00.369812 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-lib-modules\") pod \"cilium-fhkkt\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " pod="kube-system/cilium-fhkkt" Sep 12 05:49:00.370502 kubelet[2741]: I0912 05:49:00.369848 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-xtables-lock\") pod \"cilium-fhkkt\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " pod="kube-system/cilium-fhkkt" Sep 12 05:49:00.394042 systemd[1]: Created slice kubepods-besteffort-podc03efb06_e120_4665_9773_3851bdcb9833.slice - libcontainer container kubepods-besteffort-podc03efb06_e120_4665_9773_3851bdcb9833.slice. 
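The kubepods slice names in the entries above are derived mechanically from each pod's UID: dashes become underscores and the result is wrapped in a QoS-class prefix and a .slice suffix, so UID 782d0b1e-f95d-4104-bd6f-40c00ecd3c54 becomes kubepods-burstable-pod782d0b1e_f95d_4104_bd6f_40c00ecd3c54.slice. A minimal Python sketch of that mapping, written only to mirror the names visible in this log rather than kubelet's actual implementation (the guaranteed-class case is an assumption):

    # Reconstruct the systemd slice name logged for a pod, given its QoS class
    # and UID, mirroring the "Created slice kubepods-..." entries above.
    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        # '-' is a hierarchy separator in systemd slice names, so the UID's
        # dashes are replaced with underscores.
        escaped_uid = pod_uid.replace("-", "_")
        prefix = "kubepods" if qos_class == "guaranteed" else f"kubepods-{qos_class}"
        return f"{prefix}-pod{escaped_uid}.slice"

    print(pod_slice_name("burstable", "782d0b1e-f95d-4104-bd6f-40c00ecd3c54"))
    # -> kubepods-burstable-pod782d0b1e_f95d_4104_bd6f_40c00ecd3c54.slice
    print(pod_slice_name("besteffort", "da55031c-7fbd-45c8-a2c7-650062a3c736"))
    # -> kubepods-besteffort-podda55031c_7fbd_45c8_a2c7_650062a3c736.slice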
Sep 12 05:49:00.470610 kubelet[2741]: I0912 05:49:00.470554 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-host-proc-sys-kernel\") pod \"cilium-fhkkt\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " pod="kube-system/cilium-fhkkt" Sep 12 05:49:00.470610 kubelet[2741]: I0912 05:49:00.470611 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/da55031c-7fbd-45c8-a2c7-650062a3c736-kube-proxy\") pod \"kube-proxy-dqvb7\" (UID: \"da55031c-7fbd-45c8-a2c7-650062a3c736\") " pod="kube-system/kube-proxy-dqvb7" Sep 12 05:49:00.470610 kubelet[2741]: I0912 05:49:00.470631 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da55031c-7fbd-45c8-a2c7-650062a3c736-lib-modules\") pod \"kube-proxy-dqvb7\" (UID: \"da55031c-7fbd-45c8-a2c7-650062a3c736\") " pod="kube-system/kube-proxy-dqvb7" Sep 12 05:49:00.470866 kubelet[2741]: I0912 05:49:00.470690 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da55031c-7fbd-45c8-a2c7-650062a3c736-xtables-lock\") pod \"kube-proxy-dqvb7\" (UID: \"da55031c-7fbd-45c8-a2c7-650062a3c736\") " pod="kube-system/kube-proxy-dqvb7" Sep 12 05:49:00.470941 kubelet[2741]: I0912 05:49:00.470864 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-host-proc-sys-net\") pod \"cilium-fhkkt\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " pod="kube-system/cilium-fhkkt" Sep 12 05:49:00.471017 kubelet[2741]: I0912 05:49:00.470955 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-hubble-tls\") pod \"cilium-fhkkt\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " pod="kube-system/cilium-fhkkt" Sep 12 05:49:00.471073 kubelet[2741]: I0912 05:49:00.471039 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z9wf\" (UniqueName: \"kubernetes.io/projected/da55031c-7fbd-45c8-a2c7-650062a3c736-kube-api-access-7z9wf\") pod \"kube-proxy-dqvb7\" (UID: \"da55031c-7fbd-45c8-a2c7-650062a3c736\") " pod="kube-system/kube-proxy-dqvb7" Sep 12 05:49:00.471172 kubelet[2741]: I0912 05:49:00.471145 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n5bh\" (UniqueName: \"kubernetes.io/projected/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-kube-api-access-8n5bh\") pod \"cilium-fhkkt\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " pod="kube-system/cilium-fhkkt" Sep 12 05:49:00.572374 kubelet[2741]: I0912 05:49:00.572265 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c03efb06-e120-4665-9773-3851bdcb9833-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-5gzdq\" (UID: \"c03efb06-e120-4665-9773-3851bdcb9833\") " pod="kube-system/cilium-operator-6c4d7847fc-5gzdq" Sep 12 05:49:00.572374 kubelet[2741]: I0912 05:49:00.572353 2741 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jrtl\" (UniqueName: \"kubernetes.io/projected/c03efb06-e120-4665-9773-3851bdcb9833-kube-api-access-5jrtl\") pod \"cilium-operator-6c4d7847fc-5gzdq\" (UID: \"c03efb06-e120-4665-9773-3851bdcb9833\") " pod="kube-system/cilium-operator-6c4d7847fc-5gzdq" Sep 12 05:49:00.645833 kubelet[2741]: E0912 05:49:00.645655 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:00.646606 containerd[1558]: time="2025-09-12T05:49:00.646554265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fhkkt,Uid:782d0b1e-f95d-4104-bd6f-40c00ecd3c54,Namespace:kube-system,Attempt:0,}" Sep 12 05:49:00.660195 kubelet[2741]: E0912 05:49:00.660135 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:00.660737 containerd[1558]: time="2025-09-12T05:49:00.660700869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dqvb7,Uid:da55031c-7fbd-45c8-a2c7-650062a3c736,Namespace:kube-system,Attempt:0,}" Sep 12 05:49:00.698728 kubelet[2741]: E0912 05:49:00.698522 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:00.699891 containerd[1558]: time="2025-09-12T05:49:00.699825846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5gzdq,Uid:c03efb06-e120-4665-9773-3851bdcb9833,Namespace:kube-system,Attempt:0,}" Sep 12 05:49:00.775145 containerd[1558]: time="2025-09-12T05:49:00.775069604Z" level=info msg="connecting to shim 69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b" address="unix:///run/containerd/s/004153cfecdf24afaa30454be1a48041249a398c699ef8d399ac3e59b46ba546" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:49:00.846154 systemd[1]: Started cri-containerd-69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b.scope - libcontainer container 69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b. 
Sep 12 05:49:00.871754 containerd[1558]: time="2025-09-12T05:49:00.871700165Z" level=info msg="connecting to shim d116907893baa71bcb9a3e5c3dc16ad02d1f829143befeb1e36e4353306b92de" address="unix:///run/containerd/s/57d84b0dd92abf35b641567ab7b0a002912c5ca5c868fbcf9d2d19efb111d38e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:49:00.877476 containerd[1558]: time="2025-09-12T05:49:00.877290176Z" level=info msg="connecting to shim 015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239" address="unix:///run/containerd/s/64b79d69564e46904b956ddf15bb53f424aba87d4263755736fc5d21e9fe1bcb" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:49:00.883574 containerd[1558]: time="2025-09-12T05:49:00.883552855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fhkkt,Uid:782d0b1e-f95d-4104-bd6f-40c00ecd3c54,Namespace:kube-system,Attempt:0,} returns sandbox id \"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\"" Sep 12 05:49:00.885052 kubelet[2741]: E0912 05:49:00.884803 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:00.886824 containerd[1558]: time="2025-09-12T05:49:00.886801884Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 05:49:00.910308 systemd[1]: Started cri-containerd-d116907893baa71bcb9a3e5c3dc16ad02d1f829143befeb1e36e4353306b92de.scope - libcontainer container d116907893baa71bcb9a3e5c3dc16ad02d1f829143befeb1e36e4353306b92de. Sep 12 05:49:00.914622 systemd[1]: Started cri-containerd-015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239.scope - libcontainer container 015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239. 
Sep 12 05:49:00.938956 containerd[1558]: time="2025-09-12T05:49:00.938907929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dqvb7,Uid:da55031c-7fbd-45c8-a2c7-650062a3c736,Namespace:kube-system,Attempt:0,} returns sandbox id \"d116907893baa71bcb9a3e5c3dc16ad02d1f829143befeb1e36e4353306b92de\"" Sep 12 05:49:00.939867 kubelet[2741]: E0912 05:49:00.939661 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:00.946205 containerd[1558]: time="2025-09-12T05:49:00.946162836Z" level=info msg="CreateContainer within sandbox \"d116907893baa71bcb9a3e5c3dc16ad02d1f829143befeb1e36e4353306b92de\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 05:49:00.958733 containerd[1558]: time="2025-09-12T05:49:00.957642077Z" level=info msg="Container 47fabb2aa00b68891e0cbe6b8abf969eca2889d47ea27f9e37621814860601ab: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:00.969107 containerd[1558]: time="2025-09-12T05:49:00.969050003Z" level=info msg="CreateContainer within sandbox \"d116907893baa71bcb9a3e5c3dc16ad02d1f829143befeb1e36e4353306b92de\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"47fabb2aa00b68891e0cbe6b8abf969eca2889d47ea27f9e37621814860601ab\"" Sep 12 05:49:00.970109 containerd[1558]: time="2025-09-12T05:49:00.970080523Z" level=info msg="StartContainer for \"47fabb2aa00b68891e0cbe6b8abf969eca2889d47ea27f9e37621814860601ab\"" Sep 12 05:49:00.971113 containerd[1558]: time="2025-09-12T05:49:00.971075776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5gzdq,Uid:c03efb06-e120-4665-9773-3851bdcb9833,Namespace:kube-system,Attempt:0,} returns sandbox id \"015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239\"" Sep 12 05:49:00.971505 containerd[1558]: time="2025-09-12T05:49:00.971479063Z" level=info msg="connecting to shim 47fabb2aa00b68891e0cbe6b8abf969eca2889d47ea27f9e37621814860601ab" address="unix:///run/containerd/s/57d84b0dd92abf35b641567ab7b0a002912c5ca5c868fbcf9d2d19efb111d38e" protocol=ttrpc version=3 Sep 12 05:49:00.971761 kubelet[2741]: E0912 05:49:00.971738 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:00.995146 systemd[1]: Started cri-containerd-47fabb2aa00b68891e0cbe6b8abf969eca2889d47ea27f9e37621814860601ab.scope - libcontainer container 47fabb2aa00b68891e0cbe6b8abf969eca2889d47ea27f9e37621814860601ab. 
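One detail worth noticing in the containerd lines above: the kube-proxy container 47fabb2aa... connects to the same shim socket (unix:///run/containerd/s/57d84b0d...) that was opened for its sandbox d116907..., so containers can be tied back to their pod sandbox by socket address alone. A small Python sketch that groups IDs by shim address from journal text shaped like these lines (the grouping order reflecting sandbox-then-container is only what this particular log shows):

    import re
    from collections import defaultdict

    # Pair sandbox/container IDs with the shim socket they connect to, using
    # the 'connecting to shim <id>" address="unix://..."' lines above.
    SHIM_RE = re.compile(r'connecting to shim ([0-9a-f]+)" address="(unix://[^"]+)"')

    def group_by_shim(journal_text: str) -> dict:
        groups = defaultdict(list)
        for shim_id, address in SHIM_RE.findall(journal_text):
            groups[address].append(shim_id)
        return dict(groups)

    # In this log, the socket .../57d84b0d... ends up with the sandbox first and
    # its container after it: [d116907893..., 47fabb2aa00b...].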
Sep 12 05:49:01.039102 containerd[1558]: time="2025-09-12T05:49:01.039049524Z" level=info msg="StartContainer for \"47fabb2aa00b68891e0cbe6b8abf969eca2889d47ea27f9e37621814860601ab\" returns successfully" Sep 12 05:49:01.683540 kubelet[2741]: E0912 05:49:01.683504 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:02.006931 kubelet[2741]: E0912 05:49:02.006757 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:02.018664 kubelet[2741]: I0912 05:49:02.018587 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dqvb7" podStartSLOduration=2.018562343 podStartE2EDuration="2.018562343s" podCreationTimestamp="2025-09-12 05:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:49:01.74891274 +0000 UTC m=+7.205366638" watchObservedRunningTime="2025-09-12 05:49:02.018562343 +0000 UTC m=+7.475016261" Sep 12 05:49:02.686716 kubelet[2741]: E0912 05:49:02.686660 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:04.228495 kubelet[2741]: E0912 05:49:04.228445 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:04.689848 kubelet[2741]: E0912 05:49:04.689809 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:04.902140 update_engine[1542]: I20250912 05:49:04.902065 1542 update_attempter.cc:509] Updating boot flags... Sep 12 05:49:05.638527 kubelet[2741]: E0912 05:49:05.638481 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:06.502039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2204904908.mount: Deactivated successfully. 
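The recurring dns.go:153 warnings in this log indicate that the node's resolv.conf carries more nameserver entries than the resolver limit of three, so the extras are dropped and only 1.1.1.1 1.0.0.1 8.8.8.8 is applied. A minimal sketch of that truncation; the fourth entry below is purely a hypothetical placeholder, since the omitted server never appears in the log:

    # Keep at most the first three nameservers, matching the limit the warning
    # above refers to; anything beyond that is dropped.
    MAX_NAMESERVERS = 3

    def applied_nameservers(configured: list[str]) -> list[str]:
        if len(configured) > MAX_NAMESERVERS:
            print("Nameserver limits were exceeded, some nameservers have been omitted")
        return configured[:MAX_NAMESERVERS]

    # Hypothetical input for illustration only; the real omitted entry is unknown.
    print(applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"]))
    # -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']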
Sep 12 05:49:09.216495 containerd[1558]: time="2025-09-12T05:49:09.216398225Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:09.217181 containerd[1558]: time="2025-09-12T05:49:09.217120751Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 05:49:09.218574 containerd[1558]: time="2025-09-12T05:49:09.218529874Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:09.219923 containerd[1558]: time="2025-09-12T05:49:09.219837696Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.332875225s" Sep 12 05:49:09.219923 containerd[1558]: time="2025-09-12T05:49:09.219881108Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 05:49:09.221537 containerd[1558]: time="2025-09-12T05:49:09.221008710Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 05:49:09.225409 containerd[1558]: time="2025-09-12T05:49:09.225352670Z" level=info msg="CreateContainer within sandbox \"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 05:49:09.235454 containerd[1558]: time="2025-09-12T05:49:09.235395453Z" level=info msg="Container 53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:09.239676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount363073260.mount: Deactivated successfully. Sep 12 05:49:09.243434 containerd[1558]: time="2025-09-12T05:49:09.243341662Z" level=info msg="CreateContainer within sandbox \"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c\"" Sep 12 05:49:09.244050 containerd[1558]: time="2025-09-12T05:49:09.243988044Z" level=info msg="StartContainer for \"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c\"" Sep 12 05:49:09.245071 containerd[1558]: time="2025-09-12T05:49:09.245042537Z" level=info msg="connecting to shim 53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c" address="unix:///run/containerd/s/004153cfecdf24afaa30454be1a48041249a398c699ef8d399ac3e59b46ba546" protocol=ttrpc version=3 Sep 12 05:49:09.268155 systemd[1]: Started cri-containerd-53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c.scope - libcontainer container 53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c. 
Sep 12 05:49:09.305683 containerd[1558]: time="2025-09-12T05:49:09.305640465Z" level=info msg="StartContainer for \"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c\" returns successfully" Sep 12 05:49:09.319114 systemd[1]: cri-containerd-53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c.scope: Deactivated successfully. Sep 12 05:49:09.320903 containerd[1558]: time="2025-09-12T05:49:09.320858039Z" level=info msg="received exit event container_id:\"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c\" id:\"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c\" pid:3183 exited_at:{seconds:1757656149 nanos:320341012}" Sep 12 05:49:09.321321 containerd[1558]: time="2025-09-12T05:49:09.321273103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c\" id:\"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c\" pid:3183 exited_at:{seconds:1757656149 nanos:320341012}" Sep 12 05:49:09.343652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c-rootfs.mount: Deactivated successfully. Sep 12 05:49:09.699364 kubelet[2741]: E0912 05:49:09.699316 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:09.704572 containerd[1558]: time="2025-09-12T05:49:09.704486861Z" level=info msg="CreateContainer within sandbox \"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 05:49:09.717637 containerd[1558]: time="2025-09-12T05:49:09.717589016Z" level=info msg="Container bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:09.725933 containerd[1558]: time="2025-09-12T05:49:09.725866522Z" level=info msg="CreateContainer within sandbox \"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc\"" Sep 12 05:49:09.726378 containerd[1558]: time="2025-09-12T05:49:09.726354885Z" level=info msg="StartContainer for \"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc\"" Sep 12 05:49:09.728317 containerd[1558]: time="2025-09-12T05:49:09.728266249Z" level=info msg="connecting to shim bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc" address="unix:///run/containerd/s/004153cfecdf24afaa30454be1a48041249a398c699ef8d399ac3e59b46ba546" protocol=ttrpc version=3 Sep 12 05:49:09.752187 systemd[1]: Started cri-containerd-bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc.scope - libcontainer container bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc. Sep 12 05:49:09.783668 containerd[1558]: time="2025-09-12T05:49:09.783621989Z" level=info msg="StartContainer for \"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc\" returns successfully" Sep 12 05:49:09.800381 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 05:49:09.801620 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 05:49:09.801801 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
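The TaskExit events above carry the container's exit time as a protobuf timestamp (seconds since the Unix epoch plus nanoseconds), and it lines up with the surrounding journal wall-clock times: seconds 1757656149 with nanos 320341012 is 2025-09-12 05:49:09.320341 UTC, the same instant the "received exit event" entry was logged. A quick Python check of that correspondence:

    from datetime import datetime, timezone

    # Convert containerd's exited_at {seconds, nanos} into a UTC timestamp.
    def exited_at_to_utc(seconds: int, nanos: int) -> datetime:
        return datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)

    print(exited_at_to_utc(1757656149, 320341012).isoformat())
    # -> 2025-09-12T05:49:09.320341+00:00, matching the journal lines above.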
Sep 12 05:49:09.803312 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 05:49:09.804420 systemd[1]: cri-containerd-bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc.scope: Deactivated successfully. Sep 12 05:49:09.804734 systemd[1]: cri-containerd-bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc.scope: Consumed 27ms CPU time, 6.2M memory peak, 2.2M written to disk. Sep 12 05:49:09.805949 containerd[1558]: time="2025-09-12T05:49:09.805914274Z" level=info msg="received exit event container_id:\"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc\" id:\"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc\" pid:3231 exited_at:{seconds:1757656149 nanos:805530629}" Sep 12 05:49:09.806131 containerd[1558]: time="2025-09-12T05:49:09.806048077Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc\" id:\"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc\" pid:3231 exited_at:{seconds:1757656149 nanos:805530629}" Sep 12 05:49:09.824434 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 05:49:10.703800 kubelet[2741]: E0912 05:49:10.703709 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:10.710858 containerd[1558]: time="2025-09-12T05:49:10.710797109Z" level=info msg="CreateContainer within sandbox \"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 05:49:10.729216 containerd[1558]: time="2025-09-12T05:49:10.729164809Z" level=info msg="Container 9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:10.738753 containerd[1558]: time="2025-09-12T05:49:10.738686898Z" level=info msg="CreateContainer within sandbox \"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b\"" Sep 12 05:49:10.739283 containerd[1558]: time="2025-09-12T05:49:10.739255583Z" level=info msg="StartContainer for \"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b\"" Sep 12 05:49:10.740688 containerd[1558]: time="2025-09-12T05:49:10.740664213Z" level=info msg="connecting to shim 9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b" address="unix:///run/containerd/s/004153cfecdf24afaa30454be1a48041249a398c699ef8d399ac3e59b46ba546" protocol=ttrpc version=3 Sep 12 05:49:10.768420 systemd[1]: Started cri-containerd-9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b.scope - libcontainer container 9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b. Sep 12 05:49:10.817790 systemd[1]: cri-containerd-9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b.scope: Deactivated successfully. 
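The per-scope figures systemd prints above ("Consumed 27ms CPU time, 6.2M memory peak, 2.2M written to disk") come from the unit's cgroup accounting. A rough sketch of reading the same counters on a cgroup v2 host with a recent kernel; the scope path shown is an assumption pieced together from the slice and scope names in this log, not a path confirmed by it:

    from pathlib import Path

    def cpu_time_ms(scope_cgroup: Path) -> float:
        # cpu.stat's usage_usec is the scope's total CPU time in microseconds.
        for line in (scope_cgroup / "cpu.stat").read_text().splitlines():
            key, _, value = line.partition(" ")
            if key == "usage_usec":
                return int(value) / 1000.0
        return 0.0

    def memory_peak_mib(scope_cgroup: Path) -> float:
        # memory.peak is the memory high-water mark in bytes (cgroup v2, kernel >= 5.19).
        return int((scope_cgroup / "memory.peak").read_text()) / (1024 * 1024)

    # Hypothetical location for the scope from the entry above; the real path
    # depends on which slice the scope was actually placed in.
    scope = Path("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/"
                 "kubepods-burstable-pod782d0b1e_f95d_4104_bd6f_40c00ecd3c54.slice/"
                 "cri-containerd-bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc.scope")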
Sep 12 05:49:10.818957 containerd[1558]: time="2025-09-12T05:49:10.818904066Z" level=info msg="received exit event container_id:\"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b\" id:\"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b\" pid:3278 exited_at:{seconds:1757656150 nanos:818618216}" Sep 12 05:49:10.819264 containerd[1558]: time="2025-09-12T05:49:10.819218339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b\" id:\"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b\" pid:3278 exited_at:{seconds:1757656150 nanos:818618216}" Sep 12 05:49:10.819888 containerd[1558]: time="2025-09-12T05:49:10.819851987Z" level=info msg="StartContainer for \"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b\" returns successfully" Sep 12 05:49:11.237248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b-rootfs.mount: Deactivated successfully. Sep 12 05:49:11.650769 containerd[1558]: time="2025-09-12T05:49:11.650703984Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:11.651585 containerd[1558]: time="2025-09-12T05:49:11.651559581Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 05:49:11.652845 containerd[1558]: time="2025-09-12T05:49:11.652766459Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:11.654157 containerd[1558]: time="2025-09-12T05:49:11.654122961Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.433083042s" Sep 12 05:49:11.654157 containerd[1558]: time="2025-09-12T05:49:11.654153668Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 05:49:11.659162 containerd[1558]: time="2025-09-12T05:49:11.659133833Z" level=info msg="CreateContainer within sandbox \"015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 05:49:11.667822 containerd[1558]: time="2025-09-12T05:49:11.667781112Z" level=info msg="Container c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:11.675215 containerd[1558]: time="2025-09-12T05:49:11.675171738Z" level=info msg="CreateContainer within sandbox \"015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\"" Sep 12 05:49:11.675814 containerd[1558]: 
time="2025-09-12T05:49:11.675765199Z" level=info msg="StartContainer for \"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\"" Sep 12 05:49:11.677054 containerd[1558]: time="2025-09-12T05:49:11.676984632Z" level=info msg="connecting to shim c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d" address="unix:///run/containerd/s/64b79d69564e46904b956ddf15bb53f424aba87d4263755736fc5d21e9fe1bcb" protocol=ttrpc version=3 Sep 12 05:49:11.697206 systemd[1]: Started cri-containerd-c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d.scope - libcontainer container c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d. Sep 12 05:49:11.711693 kubelet[2741]: E0912 05:49:11.711640 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:11.732031 containerd[1558]: time="2025-09-12T05:49:11.730408184Z" level=info msg="CreateContainer within sandbox \"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 05:49:11.741889 containerd[1558]: time="2025-09-12T05:49:11.741834621Z" level=info msg="StartContainer for \"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\" returns successfully" Sep 12 05:49:11.750703 containerd[1558]: time="2025-09-12T05:49:11.750020709Z" level=info msg="Container ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:11.757402 containerd[1558]: time="2025-09-12T05:49:11.757333218Z" level=info msg="CreateContainer within sandbox \"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533\"" Sep 12 05:49:11.759392 containerd[1558]: time="2025-09-12T05:49:11.759139519Z" level=info msg="StartContainer for \"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533\"" Sep 12 05:49:11.760184 containerd[1558]: time="2025-09-12T05:49:11.760150347Z" level=info msg="connecting to shim ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533" address="unix:///run/containerd/s/004153cfecdf24afaa30454be1a48041249a398c699ef8d399ac3e59b46ba546" protocol=ttrpc version=3 Sep 12 05:49:11.785221 systemd[1]: Started cri-containerd-ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533.scope - libcontainer container ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533. Sep 12 05:49:11.822615 containerd[1558]: time="2025-09-12T05:49:11.822566562Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533\" id:\"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533\" pid:3366 exited_at:{seconds:1757656151 nanos:822253040}" Sep 12 05:49:11.823240 systemd[1]: cri-containerd-ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533.scope: Deactivated successfully. 
Sep 12 05:49:11.827034 containerd[1558]: time="2025-09-12T05:49:11.825693176Z" level=info msg="received exit event container_id:\"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533\" id:\"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533\" pid:3366 exited_at:{seconds:1757656151 nanos:822253040}" Sep 12 05:49:11.843334 containerd[1558]: time="2025-09-12T05:49:11.843272101Z" level=info msg="StartContainer for \"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533\" returns successfully" Sep 12 05:49:12.237017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2854094235.mount: Deactivated successfully. Sep 12 05:49:12.715936 kubelet[2741]: E0912 05:49:12.715747 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:12.719551 kubelet[2741]: E0912 05:49:12.719421 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:12.726726 containerd[1558]: time="2025-09-12T05:49:12.726666471Z" level=info msg="CreateContainer within sandbox \"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 05:49:12.750519 kubelet[2741]: I0912 05:49:12.750450 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-5gzdq" podStartSLOduration=2.067696488 podStartE2EDuration="12.750407613s" podCreationTimestamp="2025-09-12 05:49:00 +0000 UTC" firstStartedPulling="2025-09-12 05:49:00.972304302 +0000 UTC m=+6.428758200" lastFinishedPulling="2025-09-12 05:49:11.655015437 +0000 UTC m=+17.111469325" observedRunningTime="2025-09-12 05:49:12.729894474 +0000 UTC m=+18.186348372" watchObservedRunningTime="2025-09-12 05:49:12.750407613 +0000 UTC m=+18.206861501" Sep 12 05:49:12.753242 containerd[1558]: time="2025-09-12T05:49:12.753202088Z" level=info msg="Container f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:12.755125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398392822.mount: Deactivated successfully. Sep 12 05:49:12.768175 containerd[1558]: time="2025-09-12T05:49:12.768129884Z" level=info msg="CreateContainer within sandbox \"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\"" Sep 12 05:49:12.768642 containerd[1558]: time="2025-09-12T05:49:12.768611904Z" level=info msg="StartContainer for \"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\"" Sep 12 05:49:12.769677 containerd[1558]: time="2025-09-12T05:49:12.769647960Z" level=info msg="connecting to shim f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8" address="unix:///run/containerd/s/004153cfecdf24afaa30454be1a48041249a398c699ef8d399ac3e59b46ba546" protocol=ttrpc version=3 Sep 12 05:49:12.794156 systemd[1]: Started cri-containerd-f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8.scope - libcontainer container f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8. 
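For the cilium-operator-6c4d7847fc-5gzdq entry above, the numbers are self-consistent if podStartSLOduration is read as the end-to-end startup time minus the image-pull window: using the monotonic m=+ offsets in the entry, 12.750407613s - (17.111469325 - 6.428758200)s = 2.067696488s, exactly the logged SLO duration. A short check with the figures taken straight from the log:

    # Values copied from the pod_startup_latency_tracker entry above
    # (monotonic m=+ offsets, in seconds).
    e2e_duration = 12.750407613           # podStartE2EDuration
    first_started_pulling = 6.428758200   # m=+ offset of firstStartedPulling
    last_finished_pulling = 17.111469325  # m=+ offset of lastFinishedPulling

    pull_window = last_finished_pulling - first_started_pulling
    slo_duration = e2e_duration - pull_window
    print(round(slo_duration, 9))  # -> 2.067696488, the logged podStartSLOduration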
Sep 12 05:49:12.837056 containerd[1558]: time="2025-09-12T05:49:12.836993628Z" level=info msg="StartContainer for \"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\" returns successfully" Sep 12 05:49:12.907479 containerd[1558]: time="2025-09-12T05:49:12.907421633Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\" id:\"62196780f24ac110a4321513a9847d144492e8c329e2388bda5111d0f489669f\" pid:3435 exited_at:{seconds:1757656152 nanos:907116558}" Sep 12 05:49:12.996382 kubelet[2741]: I0912 05:49:12.996267 2741 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 05:49:13.050390 systemd[1]: Created slice kubepods-burstable-podcec7e3b7_e1c2_48a4_804b_e5411c5eadf5.slice - libcontainer container kubepods-burstable-podcec7e3b7_e1c2_48a4_804b_e5411c5eadf5.slice. Sep 12 05:49:13.057490 kubelet[2741]: I0912 05:49:13.056934 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dgch\" (UniqueName: \"kubernetes.io/projected/cec7e3b7-e1c2-48a4-804b-e5411c5eadf5-kube-api-access-9dgch\") pod \"coredns-674b8bbfcf-mwqnr\" (UID: \"cec7e3b7-e1c2-48a4-804b-e5411c5eadf5\") " pod="kube-system/coredns-674b8bbfcf-mwqnr" Sep 12 05:49:13.057787 kubelet[2741]: I0912 05:49:13.057629 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62pjj\" (UniqueName: \"kubernetes.io/projected/4f8a8d15-b2dd-4f1f-a4d0-86510eba54a3-kube-api-access-62pjj\") pod \"coredns-674b8bbfcf-hxbtj\" (UID: \"4f8a8d15-b2dd-4f1f-a4d0-86510eba54a3\") " pod="kube-system/coredns-674b8bbfcf-hxbtj" Sep 12 05:49:13.057787 kubelet[2741]: I0912 05:49:13.057658 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f8a8d15-b2dd-4f1f-a4d0-86510eba54a3-config-volume\") pod \"coredns-674b8bbfcf-hxbtj\" (UID: \"4f8a8d15-b2dd-4f1f-a4d0-86510eba54a3\") " pod="kube-system/coredns-674b8bbfcf-hxbtj" Sep 12 05:49:13.057787 kubelet[2741]: I0912 05:49:13.057678 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cec7e3b7-e1c2-48a4-804b-e5411c5eadf5-config-volume\") pod \"coredns-674b8bbfcf-mwqnr\" (UID: \"cec7e3b7-e1c2-48a4-804b-e5411c5eadf5\") " pod="kube-system/coredns-674b8bbfcf-mwqnr" Sep 12 05:49:13.060628 systemd[1]: Created slice kubepods-burstable-pod4f8a8d15_b2dd_4f1f_a4d0_86510eba54a3.slice - libcontainer container kubepods-burstable-pod4f8a8d15_b2dd_4f1f_a4d0_86510eba54a3.slice. 
Sep 12 05:49:13.357124 kubelet[2741]: E0912 05:49:13.357081 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:13.357720 containerd[1558]: time="2025-09-12T05:49:13.357671476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mwqnr,Uid:cec7e3b7-e1c2-48a4-804b-e5411c5eadf5,Namespace:kube-system,Attempt:0,}" Sep 12 05:49:13.363862 kubelet[2741]: E0912 05:49:13.363798 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:13.364471 containerd[1558]: time="2025-09-12T05:49:13.364434136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hxbtj,Uid:4f8a8d15-b2dd-4f1f-a4d0-86510eba54a3,Namespace:kube-system,Attempt:0,}" Sep 12 05:49:13.727090 kubelet[2741]: E0912 05:49:13.726683 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:13.727090 kubelet[2741]: E0912 05:49:13.726990 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:13.740796 kubelet[2741]: I0912 05:49:13.740740 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fhkkt" podStartSLOduration=5.405700393 podStartE2EDuration="13.740727288s" podCreationTimestamp="2025-09-12 05:49:00 +0000 UTC" firstStartedPulling="2025-09-12 05:49:00.88582137 +0000 UTC m=+6.342275269" lastFinishedPulling="2025-09-12 05:49:09.220848256 +0000 UTC m=+14.677302164" observedRunningTime="2025-09-12 05:49:13.740712309 +0000 UTC m=+19.197166207" watchObservedRunningTime="2025-09-12 05:49:13.740727288 +0000 UTC m=+19.197181186" Sep 12 05:49:14.728617 kubelet[2741]: E0912 05:49:14.728232 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:15.087975 systemd-networkd[1487]: cilium_host: Link UP Sep 12 05:49:15.088165 systemd-networkd[1487]: cilium_net: Link UP Sep 12 05:49:15.088353 systemd-networkd[1487]: cilium_net: Gained carrier Sep 12 05:49:15.088537 systemd-networkd[1487]: cilium_host: Gained carrier Sep 12 05:49:15.122233 systemd-networkd[1487]: cilium_host: Gained IPv6LL Sep 12 05:49:15.200463 systemd-networkd[1487]: cilium_vxlan: Link UP Sep 12 05:49:15.200473 systemd-networkd[1487]: cilium_vxlan: Gained carrier Sep 12 05:49:15.429030 kernel: NET: Registered PF_ALG protocol family Sep 12 05:49:15.729735 kubelet[2741]: E0912 05:49:15.729679 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:15.976511 systemd-networkd[1487]: cilium_net: Gained IPv6LL Sep 12 05:49:16.188764 systemd-networkd[1487]: lxc_health: Link UP Sep 12 05:49:16.190089 systemd-networkd[1487]: lxc_health: Gained carrier Sep 12 05:49:16.419103 systemd-networkd[1487]: lxca06d52759cea: Link UP Sep 12 05:49:16.420028 kernel: eth0: renamed from tmpe3b2f Sep 12 05:49:16.420632 systemd-networkd[1487]: lxca06d52759cea: Gained carrier Sep 12 05:49:16.421228 systemd-networkd[1487]: 
lxcb2b665d1792b: Link UP Sep 12 05:49:16.430043 kernel: eth0: renamed from tmpf051b Sep 12 05:49:16.435407 systemd-networkd[1487]: lxcb2b665d1792b: Gained carrier Sep 12 05:49:16.732068 kubelet[2741]: E0912 05:49:16.732017 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:16.808194 systemd-networkd[1487]: cilium_vxlan: Gained IPv6LL Sep 12 05:49:17.576211 systemd-networkd[1487]: lxc_health: Gained IPv6LL Sep 12 05:49:17.640213 systemd-networkd[1487]: lxca06d52759cea: Gained IPv6LL Sep 12 05:49:17.734155 kubelet[2741]: E0912 05:49:17.734110 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:18.088180 systemd-networkd[1487]: lxcb2b665d1792b: Gained IPv6LL Sep 12 05:49:18.736554 kubelet[2741]: E0912 05:49:18.736513 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:19.929570 containerd[1558]: time="2025-09-12T05:49:19.929496685Z" level=info msg="connecting to shim f051b11a72227e4d91a6caa4090eff658bf42ee3f6fa58fc6af171c2867a475a" address="unix:///run/containerd/s/b0beb045e6c357dadc9645dde529af1c6f72c825e5818de426387ed7821115ad" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:49:19.931903 containerd[1558]: time="2025-09-12T05:49:19.931835209Z" level=info msg="connecting to shim e3b2fba0796765b93a1f5e299a635d012def60e1a7b0eddd7aa5ea01dc6015b3" address="unix:///run/containerd/s/b43ffe548d59f41b0c80d8f466c4b28970c13149c67b860921632131bd1ff074" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:49:19.967282 systemd[1]: Started cri-containerd-e3b2fba0796765b93a1f5e299a635d012def60e1a7b0eddd7aa5ea01dc6015b3.scope - libcontainer container e3b2fba0796765b93a1f5e299a635d012def60e1a7b0eddd7aa5ea01dc6015b3. Sep 12 05:49:19.973281 systemd[1]: Started cri-containerd-f051b11a72227e4d91a6caa4090eff658bf42ee3f6fa58fc6af171c2867a475a.scope - libcontainer container f051b11a72227e4d91a6caa4090eff658bf42ee3f6fa58fc6af171c2867a475a. 
Sep 12 05:49:19.985657 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:49:19.990191 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:49:20.032093 containerd[1558]: time="2025-09-12T05:49:20.032046332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hxbtj,Uid:4f8a8d15-b2dd-4f1f-a4d0-86510eba54a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f051b11a72227e4d91a6caa4090eff658bf42ee3f6fa58fc6af171c2867a475a\"" Sep 12 05:49:20.033262 kubelet[2741]: E0912 05:49:20.033224 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:20.036800 containerd[1558]: time="2025-09-12T05:49:20.036758083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mwqnr,Uid:cec7e3b7-e1c2-48a4-804b-e5411c5eadf5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3b2fba0796765b93a1f5e299a635d012def60e1a7b0eddd7aa5ea01dc6015b3\"" Sep 12 05:49:20.037817 kubelet[2741]: E0912 05:49:20.037790 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:20.039402 containerd[1558]: time="2025-09-12T05:49:20.039343640Z" level=info msg="CreateContainer within sandbox \"f051b11a72227e4d91a6caa4090eff658bf42ee3f6fa58fc6af171c2867a475a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 05:49:20.045349 containerd[1558]: time="2025-09-12T05:49:20.045321314Z" level=info msg="CreateContainer within sandbox \"e3b2fba0796765b93a1f5e299a635d012def60e1a7b0eddd7aa5ea01dc6015b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 05:49:20.057991 containerd[1558]: time="2025-09-12T05:49:20.057935933Z" level=info msg="Container c561c077a4ccaa4e92b6581018dc95f6da0754b5616cfcee97a553b2a602a80f: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:20.060058 containerd[1558]: time="2025-09-12T05:49:20.060020868Z" level=info msg="Container d0ec8678f438e1672b29e997e7ab6decc6005f86b804aa943092b6a8ad65b4af: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:20.069923 containerd[1558]: time="2025-09-12T05:49:20.069864210Z" level=info msg="CreateContainer within sandbox \"f051b11a72227e4d91a6caa4090eff658bf42ee3f6fa58fc6af171c2867a475a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0ec8678f438e1672b29e997e7ab6decc6005f86b804aa943092b6a8ad65b4af\"" Sep 12 05:49:20.070534 containerd[1558]: time="2025-09-12T05:49:20.070486170Z" level=info msg="StartContainer for \"d0ec8678f438e1672b29e997e7ab6decc6005f86b804aa943092b6a8ad65b4af\"" Sep 12 05:49:20.071464 containerd[1558]: time="2025-09-12T05:49:20.071427853Z" level=info msg="connecting to shim d0ec8678f438e1672b29e997e7ab6decc6005f86b804aa943092b6a8ad65b4af" address="unix:///run/containerd/s/b0beb045e6c357dadc9645dde529af1c6f72c825e5818de426387ed7821115ad" protocol=ttrpc version=3 Sep 12 05:49:20.072540 containerd[1558]: time="2025-09-12T05:49:20.072505101Z" level=info msg="CreateContainer within sandbox \"e3b2fba0796765b93a1f5e299a635d012def60e1a7b0eddd7aa5ea01dc6015b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c561c077a4ccaa4e92b6581018dc95f6da0754b5616cfcee97a553b2a602a80f\"" Sep 12 05:49:20.073128 containerd[1558]: 
time="2025-09-12T05:49:20.073098959Z" level=info msg="StartContainer for \"c561c077a4ccaa4e92b6581018dc95f6da0754b5616cfcee97a553b2a602a80f\"" Sep 12 05:49:20.074252 containerd[1558]: time="2025-09-12T05:49:20.074231481Z" level=info msg="connecting to shim c561c077a4ccaa4e92b6581018dc95f6da0754b5616cfcee97a553b2a602a80f" address="unix:///run/containerd/s/b43ffe548d59f41b0c80d8f466c4b28970c13149c67b860921632131bd1ff074" protocol=ttrpc version=3 Sep 12 05:49:20.093129 systemd[1]: Started cri-containerd-d0ec8678f438e1672b29e997e7ab6decc6005f86b804aa943092b6a8ad65b4af.scope - libcontainer container d0ec8678f438e1672b29e997e7ab6decc6005f86b804aa943092b6a8ad65b4af. Sep 12 05:49:20.095765 systemd[1]: Started cri-containerd-c561c077a4ccaa4e92b6581018dc95f6da0754b5616cfcee97a553b2a602a80f.scope - libcontainer container c561c077a4ccaa4e92b6581018dc95f6da0754b5616cfcee97a553b2a602a80f. Sep 12 05:49:20.135279 containerd[1558]: time="2025-09-12T05:49:20.135148565Z" level=info msg="StartContainer for \"d0ec8678f438e1672b29e997e7ab6decc6005f86b804aa943092b6a8ad65b4af\" returns successfully" Sep 12 05:49:20.135601 containerd[1558]: time="2025-09-12T05:49:20.135571571Z" level=info msg="StartContainer for \"c561c077a4ccaa4e92b6581018dc95f6da0754b5616cfcee97a553b2a602a80f\" returns successfully" Sep 12 05:49:20.744664 kubelet[2741]: E0912 05:49:20.744623 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:20.746953 kubelet[2741]: E0912 05:49:20.746872 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:20.793465 kubelet[2741]: I0912 05:49:20.793375 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hxbtj" podStartSLOduration=20.793357097 podStartE2EDuration="20.793357097s" podCreationTimestamp="2025-09-12 05:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:49:20.792382112 +0000 UTC m=+26.248836010" watchObservedRunningTime="2025-09-12 05:49:20.793357097 +0000 UTC m=+26.249810995" Sep 12 05:49:21.749263 kubelet[2741]: E0912 05:49:21.748681 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:21.749263 kubelet[2741]: E0912 05:49:21.749125 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:21.760771 kubelet[2741]: I0912 05:49:21.760649 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mwqnr" podStartSLOduration=21.760627621 podStartE2EDuration="21.760627621s" podCreationTimestamp="2025-09-12 05:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:49:20.882639552 +0000 UTC m=+26.339093450" watchObservedRunningTime="2025-09-12 05:49:21.760627621 +0000 UTC m=+27.217081519" Sep 12 05:49:22.750140 kubelet[2741]: E0912 05:49:22.750095 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:22.750140 kubelet[2741]: E0912 05:49:22.750095 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:23.564649 systemd[1]: Started sshd@7-10.0.0.20:22-10.0.0.1:48286.service - OpenSSH per-connection server daemon (10.0.0.1:48286). Sep 12 05:49:23.640980 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 48286 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:23.642955 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:23.649304 systemd-logind[1540]: New session 8 of user core. Sep 12 05:49:23.659154 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 05:49:23.836474 sshd[4082]: Connection closed by 10.0.0.1 port 48286 Sep 12 05:49:23.836778 sshd-session[4079]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:23.841103 systemd[1]: sshd@7-10.0.0.20:22-10.0.0.1:48286.service: Deactivated successfully. Sep 12 05:49:23.843169 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 05:49:23.844150 systemd-logind[1540]: Session 8 logged out. Waiting for processes to exit. Sep 12 05:49:23.845605 systemd-logind[1540]: Removed session 8. Sep 12 05:49:28.857430 systemd[1]: Started sshd@8-10.0.0.20:22-10.0.0.1:48292.service - OpenSSH per-connection server daemon (10.0.0.1:48292). Sep 12 05:49:28.917467 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 48292 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:28.919295 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:28.924114 systemd-logind[1540]: New session 9 of user core. Sep 12 05:49:28.938253 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 05:49:29.074135 sshd[4100]: Connection closed by 10.0.0.1 port 48292 Sep 12 05:49:29.074521 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:29.078718 systemd[1]: sshd@8-10.0.0.20:22-10.0.0.1:48292.service: Deactivated successfully. Sep 12 05:49:29.080848 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 05:49:29.081803 systemd-logind[1540]: Session 9 logged out. Waiting for processes to exit. Sep 12 05:49:29.083020 systemd-logind[1540]: Removed session 9. Sep 12 05:49:34.089673 systemd[1]: Started sshd@9-10.0.0.20:22-10.0.0.1:45516.service - OpenSSH per-connection server daemon (10.0.0.1:45516). Sep 12 05:49:34.156494 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 45516 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:34.157969 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:34.162228 systemd-logind[1540]: New session 10 of user core. Sep 12 05:49:34.170140 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 05:49:34.281344 sshd[4120]: Connection closed by 10.0.0.1 port 45516 Sep 12 05:49:34.281736 sshd-session[4117]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:34.285647 systemd[1]: sshd@9-10.0.0.20:22-10.0.0.1:45516.service: Deactivated successfully. Sep 12 05:49:34.287694 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 05:49:34.288464 systemd-logind[1540]: Session 10 logged out. Waiting for processes to exit. 
Sep 12 05:49:34.289701 systemd-logind[1540]: Removed session 10. Sep 12 05:49:39.298138 systemd[1]: Started sshd@10-10.0.0.20:22-10.0.0.1:45520.service - OpenSSH per-connection server daemon (10.0.0.1:45520). Sep 12 05:49:39.358395 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 45520 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:39.360294 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:39.365438 systemd-logind[1540]: New session 11 of user core. Sep 12 05:49:39.378281 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 05:49:39.491767 sshd[4137]: Connection closed by 10.0.0.1 port 45520 Sep 12 05:49:39.492277 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:39.503228 systemd[1]: sshd@10-10.0.0.20:22-10.0.0.1:45520.service: Deactivated successfully. Sep 12 05:49:39.505934 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 05:49:39.506965 systemd-logind[1540]: Session 11 logged out. Waiting for processes to exit. Sep 12 05:49:39.511130 systemd[1]: Started sshd@11-10.0.0.20:22-10.0.0.1:45530.service - OpenSSH per-connection server daemon (10.0.0.1:45530). Sep 12 05:49:39.512104 systemd-logind[1540]: Removed session 11. Sep 12 05:49:39.570710 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 45530 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:39.572757 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:39.578269 systemd-logind[1540]: New session 12 of user core. Sep 12 05:49:39.592282 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 05:49:39.749518 sshd[4155]: Connection closed by 10.0.0.1 port 45530 Sep 12 05:49:39.749944 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:39.763058 systemd[1]: sshd@11-10.0.0.20:22-10.0.0.1:45530.service: Deactivated successfully. Sep 12 05:49:39.765660 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 05:49:39.767904 systemd-logind[1540]: Session 12 logged out. Waiting for processes to exit. Sep 12 05:49:39.771766 systemd[1]: Started sshd@12-10.0.0.20:22-10.0.0.1:45536.service - OpenSSH per-connection server daemon (10.0.0.1:45536). Sep 12 05:49:39.773706 systemd-logind[1540]: Removed session 12. Sep 12 05:49:39.831104 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 45536 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:39.832839 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:39.837753 systemd-logind[1540]: New session 13 of user core. Sep 12 05:49:39.848199 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 05:49:39.962022 sshd[4170]: Connection closed by 10.0.0.1 port 45536 Sep 12 05:49:39.962391 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:39.965787 systemd[1]: sshd@12-10.0.0.20:22-10.0.0.1:45536.service: Deactivated successfully. Sep 12 05:49:39.967909 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 05:49:39.969673 systemd-logind[1540]: Session 13 logged out. Waiting for processes to exit. Sep 12 05:49:39.971059 systemd-logind[1540]: Removed session 13. Sep 12 05:49:44.986714 systemd[1]: Started sshd@13-10.0.0.20:22-10.0.0.1:33792.service - OpenSSH per-connection server daemon (10.0.0.1:33792). 
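Each SSH connection above follows the same lifecycle: sshd accepts the public key, pam_unix opens a session for core, systemd-logind announces "New session N of user core", systemd starts session-N.scope, and on disconnect the session closes, the scope deactivates, and logind removes the session. A small illustrative Python sketch (assuming journal lines shaped exactly like the ones shown; the year is supplied because these timestamps omit it) that pairs the "New session" and "Removed session" events to report how long each session lasted:

    # Pair "New session N" / "Removed session N" journal lines, as emitted by
    # systemd-logind above, and compute per-session lifetimes.
    import re
    from datetime import datetime

    NEW  = re.compile(r"^(\w{3} +\d+ [\d:.]+) .*New session (\d+) of user \w+")
    GONE = re.compile(r"^(\w{3} +\d+ [\d:.]+) .*Removed session (\d+)\.")

    def session_durations(lines, year=2025):
        def ts(stamp):
            return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")
        opened, durations = {}, {}
        for line in lines:
            if m := NEW.match(line):
                opened[m.group(2)] = ts(m.group(1))
            elif (m := GONE.match(line)) and m.group(2) in opened:
                durations[m.group(2)] = ts(m.group(1)) - opened.pop(m.group(2))
        return durations

    sample = [
        "Sep 12 05:49:34.162228 systemd-logind[1540]: New session 10 of user core.",
        "Sep 12 05:49:34.289701 systemd-logind[1540]: Removed session 10.",
    ]
    print(session_durations(sample))   # {'10': datetime.timedelta(microseconds=127473)}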
Sep 12 05:49:45.044888 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 33792 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:45.046883 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:45.051717 systemd-logind[1540]: New session 14 of user core. Sep 12 05:49:45.061193 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 05:49:45.175774 sshd[4187]: Connection closed by 10.0.0.1 port 33792 Sep 12 05:49:45.176203 sshd-session[4184]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:45.181238 systemd[1]: sshd@13-10.0.0.20:22-10.0.0.1:33792.service: Deactivated successfully. Sep 12 05:49:45.183571 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 05:49:45.184814 systemd-logind[1540]: Session 14 logged out. Waiting for processes to exit. Sep 12 05:49:45.186111 systemd-logind[1540]: Removed session 14. Sep 12 05:49:50.193883 systemd[1]: Started sshd@14-10.0.0.20:22-10.0.0.1:47230.service - OpenSSH per-connection server daemon (10.0.0.1:47230). Sep 12 05:49:50.254077 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 47230 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:50.255851 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:50.260856 systemd-logind[1540]: New session 15 of user core. Sep 12 05:49:50.268168 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 05:49:50.383861 sshd[4203]: Connection closed by 10.0.0.1 port 47230 Sep 12 05:49:50.384276 sshd-session[4200]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:50.388656 systemd[1]: sshd@14-10.0.0.20:22-10.0.0.1:47230.service: Deactivated successfully. Sep 12 05:49:50.390838 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 05:49:50.391898 systemd-logind[1540]: Session 15 logged out. Waiting for processes to exit. Sep 12 05:49:50.393706 systemd-logind[1540]: Removed session 15. Sep 12 05:49:55.401364 systemd[1]: Started sshd@15-10.0.0.20:22-10.0.0.1:47236.service - OpenSSH per-connection server daemon (10.0.0.1:47236). Sep 12 05:49:55.484629 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 47236 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:55.486730 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:55.491308 systemd-logind[1540]: New session 16 of user core. Sep 12 05:49:55.506275 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 05:49:55.623287 sshd[4221]: Connection closed by 10.0.0.1 port 47236 Sep 12 05:49:55.623697 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:55.637433 systemd[1]: sshd@15-10.0.0.20:22-10.0.0.1:47236.service: Deactivated successfully. Sep 12 05:49:55.639569 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 05:49:55.640292 systemd-logind[1540]: Session 16 logged out. Waiting for processes to exit. Sep 12 05:49:55.643528 systemd[1]: Started sshd@16-10.0.0.20:22-10.0.0.1:47244.service - OpenSSH per-connection server daemon (10.0.0.1:47244). Sep 12 05:49:55.644325 systemd-logind[1540]: Removed session 16. 
Sep 12 05:49:55.704550 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 47244 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:55.706494 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:55.711311 systemd-logind[1540]: New session 17 of user core. Sep 12 05:49:55.720180 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 05:49:56.026792 sshd[4238]: Connection closed by 10.0.0.1 port 47244 Sep 12 05:49:56.027259 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:56.040130 systemd[1]: sshd@16-10.0.0.20:22-10.0.0.1:47244.service: Deactivated successfully. Sep 12 05:49:56.042322 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 05:49:56.043262 systemd-logind[1540]: Session 17 logged out. Waiting for processes to exit. Sep 12 05:49:56.046803 systemd[1]: Started sshd@17-10.0.0.20:22-10.0.0.1:47254.service - OpenSSH per-connection server daemon (10.0.0.1:47254). Sep 12 05:49:56.047465 systemd-logind[1540]: Removed session 17. Sep 12 05:49:56.108474 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 47254 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:56.110430 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:56.116053 systemd-logind[1540]: New session 18 of user core. Sep 12 05:49:56.127193 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 05:49:56.652158 sshd[4252]: Connection closed by 10.0.0.1 port 47254 Sep 12 05:49:56.652594 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:56.665261 systemd[1]: sshd@17-10.0.0.20:22-10.0.0.1:47254.service: Deactivated successfully. Sep 12 05:49:56.667447 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 05:49:56.668984 systemd-logind[1540]: Session 18 logged out. Waiting for processes to exit. Sep 12 05:49:56.672465 systemd[1]: Started sshd@18-10.0.0.20:22-10.0.0.1:47270.service - OpenSSH per-connection server daemon (10.0.0.1:47270). Sep 12 05:49:56.674453 systemd-logind[1540]: Removed session 18. Sep 12 05:49:56.729572 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 47270 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:56.731062 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:56.735874 systemd-logind[1540]: New session 19 of user core. Sep 12 05:49:56.742146 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 05:49:56.972341 sshd[4273]: Connection closed by 10.0.0.1 port 47270 Sep 12 05:49:56.974786 sshd-session[4270]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:56.985807 systemd[1]: sshd@18-10.0.0.20:22-10.0.0.1:47270.service: Deactivated successfully. Sep 12 05:49:56.988152 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 05:49:56.989241 systemd-logind[1540]: Session 19 logged out. Waiting for processes to exit. Sep 12 05:49:56.992140 systemd[1]: Started sshd@19-10.0.0.20:22-10.0.0.1:47274.service - OpenSSH per-connection server daemon (10.0.0.1:47274). Sep 12 05:49:56.992804 systemd-logind[1540]: Removed session 19. 
Sep 12 05:49:57.047481 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 47274 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:57.049281 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:57.054314 systemd-logind[1540]: New session 20 of user core. Sep 12 05:49:57.066217 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 05:49:57.180118 sshd[4287]: Connection closed by 10.0.0.1 port 47274 Sep 12 05:49:57.180519 sshd-session[4284]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:57.185470 systemd[1]: sshd@19-10.0.0.20:22-10.0.0.1:47274.service: Deactivated successfully. Sep 12 05:49:57.187659 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 05:49:57.188540 systemd-logind[1540]: Session 20 logged out. Waiting for processes to exit. Sep 12 05:49:57.189893 systemd-logind[1540]: Removed session 20. Sep 12 05:50:02.193769 systemd[1]: Started sshd@20-10.0.0.20:22-10.0.0.1:60302.service - OpenSSH per-connection server daemon (10.0.0.1:60302). Sep 12 05:50:02.266333 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 60302 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:50:02.268399 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:50:02.274314 systemd-logind[1540]: New session 21 of user core. Sep 12 05:50:02.282188 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 05:50:02.395335 sshd[4307]: Connection closed by 10.0.0.1 port 60302 Sep 12 05:50:02.395742 sshd-session[4304]: pam_unix(sshd:session): session closed for user core Sep 12 05:50:02.399891 systemd[1]: sshd@20-10.0.0.20:22-10.0.0.1:60302.service: Deactivated successfully. Sep 12 05:50:02.402587 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 05:50:02.404246 systemd-logind[1540]: Session 21 logged out. Waiting for processes to exit. Sep 12 05:50:02.405944 systemd-logind[1540]: Removed session 21. Sep 12 05:50:07.408960 systemd[1]: Started sshd@21-10.0.0.20:22-10.0.0.1:60310.service - OpenSSH per-connection server daemon (10.0.0.1:60310). Sep 12 05:50:07.465778 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 60310 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:50:07.467363 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:50:07.472284 systemd-logind[1540]: New session 22 of user core. Sep 12 05:50:07.483178 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 05:50:07.588582 sshd[4323]: Connection closed by 10.0.0.1 port 60310 Sep 12 05:50:07.588988 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Sep 12 05:50:07.594138 systemd[1]: sshd@21-10.0.0.20:22-10.0.0.1:60310.service: Deactivated successfully. Sep 12 05:50:07.596253 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 05:50:07.597028 systemd-logind[1540]: Session 22 logged out. Waiting for processes to exit. Sep 12 05:50:07.598214 systemd-logind[1540]: Removed session 22. Sep 12 05:50:07.654098 kubelet[2741]: E0912 05:50:07.654031 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:50:12.602210 systemd[1]: Started sshd@22-10.0.0.20:22-10.0.0.1:38408.service - OpenSSH per-connection server daemon (10.0.0.1:38408). 
Sep 12 05:50:12.662911 sshd[4336]: Accepted publickey for core from 10.0.0.1 port 38408 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:50:12.664736 sshd-session[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:50:12.669455 systemd-logind[1540]: New session 23 of user core. Sep 12 05:50:12.677125 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 05:50:12.786414 sshd[4339]: Connection closed by 10.0.0.1 port 38408 Sep 12 05:50:12.786933 sshd-session[4336]: pam_unix(sshd:session): session closed for user core Sep 12 05:50:12.801047 systemd[1]: sshd@22-10.0.0.20:22-10.0.0.1:38408.service: Deactivated successfully. Sep 12 05:50:12.803201 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 05:50:12.803962 systemd-logind[1540]: Session 23 logged out. Waiting for processes to exit. Sep 12 05:50:12.807693 systemd[1]: Started sshd@23-10.0.0.20:22-10.0.0.1:38424.service - OpenSSH per-connection server daemon (10.0.0.1:38424). Sep 12 05:50:12.808486 systemd-logind[1540]: Removed session 23. Sep 12 05:50:12.866889 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 38424 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:50:12.868805 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:50:12.874246 systemd-logind[1540]: New session 24 of user core. Sep 12 05:50:12.888230 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 05:50:14.243970 containerd[1558]: time="2025-09-12T05:50:14.242987639Z" level=info msg="StopContainer for \"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\" with timeout 30 (s)" Sep 12 05:50:14.252293 containerd[1558]: time="2025-09-12T05:50:14.252266599Z" level=info msg="Stop container \"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\" with signal terminated" Sep 12 05:50:14.265524 systemd[1]: cri-containerd-c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d.scope: Deactivated successfully. 
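The StopContainer entry above ("with timeout 30 (s)", then "with signal terminated") describes the usual stop sequence: the runtime sends SIGTERM and only escalates to SIGKILL if the container outlives the grace period. The sketch below is not containerd's code, just a generic standard-library Python illustration of that terminate-then-kill pattern (it assumes a POSIX system with a sleep binary):

    # Generic terminate-then-kill pattern, analogous to the StopContainer
    # sequence in the log above. Not containerd's implementation.
    import subprocess

    def stop_with_grace(proc: subprocess.Popen, timeout: float = 30.0) -> int:
        proc.terminate()                 # SIGTERM, i.e. "signal terminated"
        try:
            return proc.wait(timeout=timeout)
        except subprocess.TimeoutExpired:
            proc.kill()                  # SIGKILL; a shell would report the
            return proc.wait()           # result as status 137 (128 + 9)

    if __name__ == "__main__":
        p = subprocess.Popen(["sleep", "300"])
        print("exit code:", stop_with_grace(p, timeout=2.0))   # -15 (SIGTERM)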
Sep 12 05:50:14.267322 containerd[1558]: time="2025-09-12T05:50:14.267250564Z" level=info msg="received exit event container_id:\"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\" id:\"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\" pid:3333 exited_at:{seconds:1757656214 nanos:266913449}" Sep 12 05:50:14.267491 containerd[1558]: time="2025-09-12T05:50:14.267329585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\" id:\"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\" pid:3333 exited_at:{seconds:1757656214 nanos:266913449}" Sep 12 05:50:14.280834 containerd[1558]: time="2025-09-12T05:50:14.280787568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\" id:\"41c3067aff6bd86d8be02cc30d5899fa22933a953894789771ed299ff9c22e05\" pid:4377 exited_at:{seconds:1757656214 nanos:280593006}" Sep 12 05:50:14.281936 containerd[1558]: time="2025-09-12T05:50:14.281814504Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 05:50:14.284125 containerd[1558]: time="2025-09-12T05:50:14.284067229Z" level=info msg="StopContainer for \"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\" with timeout 2 (s)" Sep 12 05:50:14.284354 containerd[1558]: time="2025-09-12T05:50:14.284330543Z" level=info msg="Stop container \"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\" with signal terminated" Sep 12 05:50:14.294603 systemd-networkd[1487]: lxc_health: Link DOWN Sep 12 05:50:14.294611 systemd-networkd[1487]: lxc_health: Lost carrier Sep 12 05:50:14.296378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d-rootfs.mount: Deactivated successfully. Sep 12 05:50:14.321559 systemd[1]: cri-containerd-f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8.scope: Deactivated successfully. Sep 12 05:50:14.322468 systemd[1]: cri-containerd-f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8.scope: Consumed 6.878s CPU time, 122.5M memory peak, 232K read from disk, 14.6M written to disk. 
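The exit events above carry exited_at:{seconds:1757656214 nanos:266913449}, a Unix epoch value plus nanoseconds. A quick standard-library check converts it back to wall-clock time and, as expected, lands on the same 2025-09-12 05:50:14 UTC instant as the surrounding journal timestamps:

    # Convert the exited_at {seconds, nanos} pair from the exit event above
    # into a readable UTC timestamp.
    from datetime import datetime, timezone

    seconds, nanos = 1757656214, 266913449          # values from the log
    when = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(
        microsecond=nanos // 1000
    )
    print(when.isoformat())   # 2025-09-12T05:50:14.266913+00:00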
Sep 12 05:50:14.323935 containerd[1558]: time="2025-09-12T05:50:14.323886275Z" level=info msg="StopContainer for \"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\" returns successfully" Sep 12 05:50:14.324124 containerd[1558]: time="2025-09-12T05:50:14.323976618Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\" id:\"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\" pid:3405 exited_at:{seconds:1757656214 nanos:323706641}" Sep 12 05:50:14.324124 containerd[1558]: time="2025-09-12T05:50:14.324033837Z" level=info msg="received exit event container_id:\"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\" id:\"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\" pid:3405 exited_at:{seconds:1757656214 nanos:323706641}" Sep 12 05:50:14.326913 containerd[1558]: time="2025-09-12T05:50:14.326878826Z" level=info msg="StopPodSandbox for \"015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239\"" Sep 12 05:50:14.335112 containerd[1558]: time="2025-09-12T05:50:14.335071986Z" level=info msg="Container to stop \"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 05:50:14.342972 systemd[1]: cri-containerd-015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239.scope: Deactivated successfully. Sep 12 05:50:14.345115 containerd[1558]: time="2025-09-12T05:50:14.345065555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239\" id:\"015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239\" pid:2944 exit_status:137 exited_at:{seconds:1757656214 nanos:343784972}" Sep 12 05:50:14.347914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8-rootfs.mount: Deactivated successfully. Sep 12 05:50:14.374747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239-rootfs.mount: Deactivated successfully. 
Sep 12 05:50:14.397835 containerd[1558]: time="2025-09-12T05:50:14.397794735Z" level=info msg="StopContainer for \"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\" returns successfully" Sep 12 05:50:14.398153 containerd[1558]: time="2025-09-12T05:50:14.397935825Z" level=info msg="shim disconnected" id=015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239 namespace=k8s.io Sep 12 05:50:14.398153 containerd[1558]: time="2025-09-12T05:50:14.398146499Z" level=warning msg="cleaning up after shim disconnected" id=015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239 namespace=k8s.io Sep 12 05:50:14.409534 containerd[1558]: time="2025-09-12T05:50:14.398170164Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 05:50:14.409634 containerd[1558]: time="2025-09-12T05:50:14.400034885Z" level=info msg="StopPodSandbox for \"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\"" Sep 12 05:50:14.409634 containerd[1558]: time="2025-09-12T05:50:14.409618449Z" level=info msg="Container to stop \"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 05:50:14.409634 containerd[1558]: time="2025-09-12T05:50:14.409630221Z" level=info msg="Container to stop \"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 05:50:14.409711 containerd[1558]: time="2025-09-12T05:50:14.409640250Z" level=info msg="Container to stop \"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 05:50:14.409711 containerd[1558]: time="2025-09-12T05:50:14.409650510Z" level=info msg="Container to stop \"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 05:50:14.409711 containerd[1558]: time="2025-09-12T05:50:14.409658345Z" level=info msg="Container to stop \"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 05:50:14.416178 systemd[1]: cri-containerd-69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b.scope: Deactivated successfully. Sep 12 05:50:14.436324 containerd[1558]: time="2025-09-12T05:50:14.434143896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\" id:\"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\" pid:2861 exit_status:137 exited_at:{seconds:1757656214 nanos:417450036}" Sep 12 05:50:14.436410 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239-shm.mount: Deactivated successfully. Sep 12 05:50:14.442124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b-rootfs.mount: Deactivated successfully. 
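Both sandbox TaskExit events above report exit_status:137, the usual shell encoding for a process killed by a signal (128 + 9, i.e. SIGKILL), which fits a forced teardown rather than a clean exit. A small illustrative helper (assuming lines shaped like the ones shown) that pulls the status out of such entries and decodes signal deaths:

    # Illustrative: decode the exit_status field of containerd TaskExit lines.
    import re

    def decode_exit(line: str) -> str | None:
        m = re.search(r"exit_status:(\d+)", line)
        if not m:
            return None
        status = int(m.group(1))
        if status > 128:                  # 128 + N means "killed by signal N"
            return f"killed by signal {status - 128} (exit_status {status})"
        return f"exited with code {status}"

    sample = "TaskExit event in podsandbox handler ... exit_status:137 exited_at:{...}"
    print(decode_exit(sample))   # killed by signal 9 (exit_status 137)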
Sep 12 05:50:14.444110 containerd[1558]: time="2025-09-12T05:50:14.444056009Z" level=info msg="TearDown network for sandbox \"015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239\" successfully" Sep 12 05:50:14.444110 containerd[1558]: time="2025-09-12T05:50:14.444108840Z" level=info msg="StopPodSandbox for \"015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239\" returns successfully" Sep 12 05:50:14.445634 containerd[1558]: time="2025-09-12T05:50:14.445586941Z" level=info msg="received exit event sandbox_id:\"015b1b597940d0b59f3f14e47be294ff5aa2a451f626c8f8568794aa14461239\" exit_status:137 exited_at:{seconds:1757656214 nanos:343784972}" Sep 12 05:50:14.451218 containerd[1558]: time="2025-09-12T05:50:14.451131849Z" level=info msg="received exit event sandbox_id:\"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\" exit_status:137 exited_at:{seconds:1757656214 nanos:417450036}" Sep 12 05:50:14.452488 containerd[1558]: time="2025-09-12T05:50:14.452461766Z" level=info msg="shim disconnected" id=69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b namespace=k8s.io Sep 12 05:50:14.452722 containerd[1558]: time="2025-09-12T05:50:14.452504117Z" level=warning msg="cleaning up after shim disconnected" id=69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b namespace=k8s.io Sep 12 05:50:14.452722 containerd[1558]: time="2025-09-12T05:50:14.452512904Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 05:50:14.452722 containerd[1558]: time="2025-09-12T05:50:14.452819050Z" level=info msg="TearDown network for sandbox \"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\" successfully" Sep 12 05:50:14.452722 containerd[1558]: time="2025-09-12T05:50:14.452835392Z" level=info msg="StopPodSandbox for \"69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b\" returns successfully" Sep 12 05:50:14.537872 kubelet[2741]: I0912 05:50:14.537834 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-bpf-maps\") pod \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " Sep 12 05:50:14.538360 kubelet[2741]: I0912 05:50:14.537893 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c03efb06-e120-4665-9773-3851bdcb9833-cilium-config-path\") pod \"c03efb06-e120-4665-9773-3851bdcb9833\" (UID: \"c03efb06-e120-4665-9773-3851bdcb9833\") " Sep 12 05:50:14.538360 kubelet[2741]: I0912 05:50:14.537921 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-host-proc-sys-net\") pod \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " Sep 12 05:50:14.538360 kubelet[2741]: I0912 05:50:14.537946 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-hubble-tls\") pod \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " Sep 12 05:50:14.538360 kubelet[2741]: I0912 05:50:14.537967 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-lib-modules\") pod 
\"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " Sep 12 05:50:14.538360 kubelet[2741]: I0912 05:50:14.538021 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-clustermesh-secrets\") pod \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " Sep 12 05:50:14.538360 kubelet[2741]: I0912 05:50:14.538026 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "782d0b1e-f95d-4104-bd6f-40c00ecd3c54" (UID: "782d0b1e-f95d-4104-bd6f-40c00ecd3c54"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 05:50:14.538525 kubelet[2741]: I0912 05:50:14.538043 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cilium-run\") pod \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " Sep 12 05:50:14.538525 kubelet[2741]: I0912 05:50:14.538061 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-hostproc\") pod \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " Sep 12 05:50:14.538525 kubelet[2741]: I0912 05:50:14.538081 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-xtables-lock\") pod \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " Sep 12 05:50:14.538525 kubelet[2741]: I0912 05:50:14.538071 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "782d0b1e-f95d-4104-bd6f-40c00ecd3c54" (UID: "782d0b1e-f95d-4104-bd6f-40c00ecd3c54"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 05:50:14.538525 kubelet[2741]: I0912 05:50:14.538103 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n5bh\" (UniqueName: \"kubernetes.io/projected/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-kube-api-access-8n5bh\") pod \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " Sep 12 05:50:14.538525 kubelet[2741]: I0912 05:50:14.538194 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cilium-config-path\") pod \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " Sep 12 05:50:14.538671 kubelet[2741]: I0912 05:50:14.538223 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-host-proc-sys-kernel\") pod \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " Sep 12 05:50:14.538671 kubelet[2741]: I0912 05:50:14.538243 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jrtl\" (UniqueName: \"kubernetes.io/projected/c03efb06-e120-4665-9773-3851bdcb9833-kube-api-access-5jrtl\") pod \"c03efb06-e120-4665-9773-3851bdcb9833\" (UID: \"c03efb06-e120-4665-9773-3851bdcb9833\") " Sep 12 05:50:14.538671 kubelet[2741]: I0912 05:50:14.538265 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-etc-cni-netd\") pod \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " Sep 12 05:50:14.538671 kubelet[2741]: I0912 05:50:14.538284 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cni-path\") pod \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " Sep 12 05:50:14.538671 kubelet[2741]: I0912 05:50:14.538300 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cilium-cgroup\") pod \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\" (UID: \"782d0b1e-f95d-4104-bd6f-40c00ecd3c54\") " Sep 12 05:50:14.538671 kubelet[2741]: I0912 05:50:14.538360 2741 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 05:50:14.538671 kubelet[2741]: I0912 05:50:14.538372 2741 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 12 05:50:14.538835 kubelet[2741]: I0912 05:50:14.538411 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "782d0b1e-f95d-4104-bd6f-40c00ecd3c54" (UID: "782d0b1e-f95d-4104-bd6f-40c00ecd3c54"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 05:50:14.541899 kubelet[2741]: I0912 05:50:14.541863 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "782d0b1e-f95d-4104-bd6f-40c00ecd3c54" (UID: "782d0b1e-f95d-4104-bd6f-40c00ecd3c54"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 05:50:14.541949 kubelet[2741]: I0912 05:50:14.541908 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "782d0b1e-f95d-4104-bd6f-40c00ecd3c54" (UID: "782d0b1e-f95d-4104-bd6f-40c00ecd3c54"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 05:50:14.542171 kubelet[2741]: I0912 05:50:14.542139 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03efb06-e120-4665-9773-3851bdcb9833-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c03efb06-e120-4665-9773-3851bdcb9833" (UID: "c03efb06-e120-4665-9773-3851bdcb9833"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 05:50:14.547823 kubelet[2741]: I0912 05:50:14.547672 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "782d0b1e-f95d-4104-bd6f-40c00ecd3c54" (UID: "782d0b1e-f95d-4104-bd6f-40c00ecd3c54"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 05:50:14.547823 kubelet[2741]: I0912 05:50:14.547718 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "782d0b1e-f95d-4104-bd6f-40c00ecd3c54" (UID: "782d0b1e-f95d-4104-bd6f-40c00ecd3c54"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 05:50:14.547823 kubelet[2741]: I0912 05:50:14.547739 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "782d0b1e-f95d-4104-bd6f-40c00ecd3c54" (UID: "782d0b1e-f95d-4104-bd6f-40c00ecd3c54"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 05:50:14.547823 kubelet[2741]: I0912 05:50:14.547754 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cni-path" (OuterVolumeSpecName: "cni-path") pod "782d0b1e-f95d-4104-bd6f-40c00ecd3c54" (UID: "782d0b1e-f95d-4104-bd6f-40c00ecd3c54"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 05:50:14.547823 kubelet[2741]: I0912 05:50:14.547767 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-hostproc" (OuterVolumeSpecName: "hostproc") pod "782d0b1e-f95d-4104-bd6f-40c00ecd3c54" (UID: "782d0b1e-f95d-4104-bd6f-40c00ecd3c54"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 05:50:14.547978 kubelet[2741]: I0912 05:50:14.547780 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "782d0b1e-f95d-4104-bd6f-40c00ecd3c54" (UID: "782d0b1e-f95d-4104-bd6f-40c00ecd3c54"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 05:50:14.547978 kubelet[2741]: I0912 05:50:14.547794 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "782d0b1e-f95d-4104-bd6f-40c00ecd3c54" (UID: "782d0b1e-f95d-4104-bd6f-40c00ecd3c54"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 05:50:14.548203 kubelet[2741]: I0912 05:50:14.548177 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "782d0b1e-f95d-4104-bd6f-40c00ecd3c54" (UID: "782d0b1e-f95d-4104-bd6f-40c00ecd3c54"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 05:50:14.548314 kubelet[2741]: I0912 05:50:14.548214 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-kube-api-access-8n5bh" (OuterVolumeSpecName: "kube-api-access-8n5bh") pod "782d0b1e-f95d-4104-bd6f-40c00ecd3c54" (UID: "782d0b1e-f95d-4104-bd6f-40c00ecd3c54"). InnerVolumeSpecName "kube-api-access-8n5bh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 05:50:14.548314 kubelet[2741]: I0912 05:50:14.548224 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03efb06-e120-4665-9773-3851bdcb9833-kube-api-access-5jrtl" (OuterVolumeSpecName: "kube-api-access-5jrtl") pod "c03efb06-e120-4665-9773-3851bdcb9833" (UID: "c03efb06-e120-4665-9773-3851bdcb9833"). InnerVolumeSpecName "kube-api-access-5jrtl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 05:50:14.638562 kubelet[2741]: I0912 05:50:14.638503 2741 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 05:50:14.638562 kubelet[2741]: I0912 05:50:14.638531 2741 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 12 05:50:14.638562 kubelet[2741]: I0912 05:50:14.638569 2741 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5jrtl\" (UniqueName: \"kubernetes.io/projected/c03efb06-e120-4665-9773-3851bdcb9833-kube-api-access-5jrtl\") on node \"localhost\" DevicePath \"\"" Sep 12 05:50:14.638562 kubelet[2741]: I0912 05:50:14.638579 2741 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 12 05:50:14.638562 kubelet[2741]: I0912 05:50:14.638587 2741 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 05:50:14.638851 kubelet[2741]: I0912 05:50:14.638596 2741 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 05:50:14.638851 kubelet[2741]: I0912 05:50:14.638605 2741 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c03efb06-e120-4665-9773-3851bdcb9833-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 05:50:14.638851 kubelet[2741]: I0912 05:50:14.638615 2741 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 05:50:14.638851 kubelet[2741]: I0912 05:50:14.638623 2741 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 12 05:50:14.638851 kubelet[2741]: I0912 05:50:14.638631 2741 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 05:50:14.638851 kubelet[2741]: I0912 05:50:14.638639 2741 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 05:50:14.638851 kubelet[2741]: I0912 05:50:14.638647 2741 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 05:50:14.638851 kubelet[2741]: I0912 05:50:14.638656 2741 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-xtables-lock\") on node \"localhost\" 
DevicePath \"\"" Sep 12 05:50:14.639101 kubelet[2741]: I0912 05:50:14.638664 2741 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8n5bh\" (UniqueName: \"kubernetes.io/projected/782d0b1e-f95d-4104-bd6f-40c00ecd3c54-kube-api-access-8n5bh\") on node \"localhost\" DevicePath \"\"" Sep 12 05:50:14.664243 systemd[1]: Removed slice kubepods-burstable-pod782d0b1e_f95d_4104_bd6f_40c00ecd3c54.slice - libcontainer container kubepods-burstable-pod782d0b1e_f95d_4104_bd6f_40c00ecd3c54.slice. Sep 12 05:50:14.664785 systemd[1]: kubepods-burstable-pod782d0b1e_f95d_4104_bd6f_40c00ecd3c54.slice: Consumed 7.000s CPU time, 122.8M memory peak, 240K read from disk, 16.9M written to disk. Sep 12 05:50:14.666124 systemd[1]: Removed slice kubepods-besteffort-podc03efb06_e120_4665_9773_3851bdcb9833.slice - libcontainer container kubepods-besteffort-podc03efb06_e120_4665_9773_3851bdcb9833.slice. Sep 12 05:50:14.710054 kubelet[2741]: E0912 05:50:14.709981 2741 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 05:50:14.903583 kubelet[2741]: I0912 05:50:14.903445 2741 scope.go:117] "RemoveContainer" containerID="c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d" Sep 12 05:50:14.907836 containerd[1558]: time="2025-09-12T05:50:14.907799049Z" level=info msg="RemoveContainer for \"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\"" Sep 12 05:50:14.914951 containerd[1558]: time="2025-09-12T05:50:14.914894317Z" level=info msg="RemoveContainer for \"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\" returns successfully" Sep 12 05:50:14.916049 kubelet[2741]: I0912 05:50:14.915871 2741 scope.go:117] "RemoveContainer" containerID="c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d" Sep 12 05:50:14.916943 containerd[1558]: time="2025-09-12T05:50:14.916767544Z" level=error msg="ContainerStatus for \"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\": not found" Sep 12 05:50:14.920985 kubelet[2741]: E0912 05:50:14.920901 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\": not found" containerID="c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d" Sep 12 05:50:14.921233 kubelet[2741]: I0912 05:50:14.920956 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d"} err="failed to get container status \"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4a977ab751c7b49572a16159965b5c78b0f2a248762ab4f45545c4ec219fd5d\": not found" Sep 12 05:50:14.921233 kubelet[2741]: I0912 05:50:14.921086 2741 scope.go:117] "RemoveContainer" containerID="f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8" Sep 12 05:50:14.924193 containerd[1558]: time="2025-09-12T05:50:14.924153037Z" level=info msg="RemoveContainer for \"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\"" Sep 12 05:50:14.929632 containerd[1558]: 
time="2025-09-12T05:50:14.929600500Z" level=info msg="RemoveContainer for \"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\" returns successfully" Sep 12 05:50:14.929790 kubelet[2741]: I0912 05:50:14.929751 2741 scope.go:117] "RemoveContainer" containerID="ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533" Sep 12 05:50:14.932457 containerd[1558]: time="2025-09-12T05:50:14.932411343Z" level=info msg="RemoveContainer for \"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533\"" Sep 12 05:50:14.938226 containerd[1558]: time="2025-09-12T05:50:14.938124844Z" level=info msg="RemoveContainer for \"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533\" returns successfully" Sep 12 05:50:14.938482 kubelet[2741]: I0912 05:50:14.938442 2741 scope.go:117] "RemoveContainer" containerID="9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b" Sep 12 05:50:14.940567 containerd[1558]: time="2025-09-12T05:50:14.940539129Z" level=info msg="RemoveContainer for \"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b\"" Sep 12 05:50:14.945082 containerd[1558]: time="2025-09-12T05:50:14.945059054Z" level=info msg="RemoveContainer for \"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b\" returns successfully" Sep 12 05:50:14.947186 kubelet[2741]: I0912 05:50:14.947155 2741 scope.go:117] "RemoveContainer" containerID="bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc" Sep 12 05:50:14.948495 containerd[1558]: time="2025-09-12T05:50:14.948461531Z" level=info msg="RemoveContainer for \"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc\"" Sep 12 05:50:14.952419 containerd[1558]: time="2025-09-12T05:50:14.952379895Z" level=info msg="RemoveContainer for \"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc\" returns successfully" Sep 12 05:50:14.952539 kubelet[2741]: I0912 05:50:14.952517 2741 scope.go:117] "RemoveContainer" containerID="53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c" Sep 12 05:50:14.953845 containerd[1558]: time="2025-09-12T05:50:14.953819201Z" level=info msg="RemoveContainer for \"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c\"" Sep 12 05:50:14.957389 containerd[1558]: time="2025-09-12T05:50:14.957356776Z" level=info msg="RemoveContainer for \"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c\" returns successfully" Sep 12 05:50:14.957622 kubelet[2741]: I0912 05:50:14.957602 2741 scope.go:117] "RemoveContainer" containerID="f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8" Sep 12 05:50:14.957767 containerd[1558]: time="2025-09-12T05:50:14.957732515Z" level=error msg="ContainerStatus for \"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\": not found" Sep 12 05:50:14.957875 kubelet[2741]: E0912 05:50:14.957856 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\": not found" containerID="f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8" Sep 12 05:50:14.957920 kubelet[2741]: I0912 05:50:14.957880 2741 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8"} err="failed to get container status \"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"f2798aaa3afaf3e94b27453d13d8955a7df5450a06a3cf6cff81491272a181e8\": not found" Sep 12 05:50:14.957920 kubelet[2741]: I0912 05:50:14.957897 2741 scope.go:117] "RemoveContainer" containerID="ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533" Sep 12 05:50:14.958106 containerd[1558]: time="2025-09-12T05:50:14.958065673Z" level=error msg="ContainerStatus for \"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533\": not found" Sep 12 05:50:14.958264 kubelet[2741]: E0912 05:50:14.958174 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533\": not found" containerID="ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533" Sep 12 05:50:14.958264 kubelet[2741]: I0912 05:50:14.958190 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533"} err="failed to get container status \"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab0f8b7394ab741bf243cff114ce0afc8eaeb85f58b4b7a03572dd60ee253533\": not found" Sep 12 05:50:14.958264 kubelet[2741]: I0912 05:50:14.958206 2741 scope.go:117] "RemoveContainer" containerID="9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b" Sep 12 05:50:14.958384 containerd[1558]: time="2025-09-12T05:50:14.958349967Z" level=error msg="ContainerStatus for \"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b\": not found" Sep 12 05:50:14.958489 kubelet[2741]: E0912 05:50:14.958459 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b\": not found" containerID="9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b" Sep 12 05:50:14.958528 kubelet[2741]: I0912 05:50:14.958486 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b"} err="failed to get container status \"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b\": rpc error: code = NotFound desc = an error occurred when try to find container \"9bbece37d267ebcf4f64a5b7678f940524e9c1ad9c2ff041c994d553185a065b\": not found" Sep 12 05:50:14.958528 kubelet[2741]: I0912 05:50:14.958501 2741 scope.go:117] "RemoveContainer" containerID="bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc" Sep 12 05:50:14.958654 containerd[1558]: time="2025-09-12T05:50:14.958624363Z" level=error msg="ContainerStatus for \"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc\": not found" Sep 12 05:50:14.958753 kubelet[2741]: E0912 05:50:14.958728 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc\": not found" containerID="bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc" Sep 12 05:50:14.958753 kubelet[2741]: I0912 05:50:14.958749 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc"} err="failed to get container status \"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"bbe4bb15522ad656bb81aeb9236e439c2081f6001beafade668b173626c053fc\": not found" Sep 12 05:50:14.958838 kubelet[2741]: I0912 05:50:14.958762 2741 scope.go:117] "RemoveContainer" containerID="53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c" Sep 12 05:50:14.958905 containerd[1558]: time="2025-09-12T05:50:14.958876916Z" level=error msg="ContainerStatus for \"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c\": not found" Sep 12 05:50:14.958989 kubelet[2741]: E0912 05:50:14.958968 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c\": not found" containerID="53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c" Sep 12 05:50:14.958989 kubelet[2741]: I0912 05:50:14.958986 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c"} err="failed to get container status \"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c\": rpc error: code = NotFound desc = an error occurred when try to find container \"53fa6121961cb02f6fe01a4bb415bfa6d734e9602e3c98af241153ff2e07640c\": not found" Sep 12 05:50:15.296178 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69e6e0f10a050975eece2e30112092174ba885cbc9376b270f028b9af9a78e7b-shm.mount: Deactivated successfully. Sep 12 05:50:15.296300 systemd[1]: var-lib-kubelet-pods-c03efb06\x2de120\x2d4665\x2d9773\x2d3851bdcb9833-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5jrtl.mount: Deactivated successfully. Sep 12 05:50:15.296380 systemd[1]: var-lib-kubelet-pods-782d0b1e\x2df95d\x2d4104\x2dbd6f\x2d40c00ecd3c54-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8n5bh.mount: Deactivated successfully. Sep 12 05:50:15.296463 systemd[1]: var-lib-kubelet-pods-782d0b1e\x2df95d\x2d4104\x2dbd6f\x2d40c00ecd3c54-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 05:50:15.296539 systemd[1]: var-lib-kubelet-pods-782d0b1e\x2df95d\x2d4104\x2dbd6f\x2d40c00ecd3c54-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
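The mount units cleaned up above carry systemd's unit-name escaping of the kubelet volume paths: "/" becomes "-", while literal "-" and "~" inside a component become \x2d and \x7e. A simplified decoder sketch (it only handles the escapes visible above, not every rule of systemd escaping) recovers the original path; on a host, systemd-escape --unescape --path does the same:

    # Decode systemd-escaped mount unit names like the ones deactivated above.
    import re

    def unescape_unit_path(unit: str) -> str:
        name = unit.removesuffix(".mount")
        name = name.replace("-", "/")                 # "-" separates components
        name = re.sub(r"\\x([0-9a-fA-F]{2})",         # \x2d -> "-", \x7e -> "~"
                      lambda m: chr(int(m.group(1), 16)), name)
        return "/" + name

    print(unescape_unit_path(
        r"var-lib-kubelet-pods-c03efb06\x2de120\x2d4665\x2d9773\x2d3851bdcb9833-"
        r"volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5jrtl.mount"
    ))
    # /var/lib/kubelet/pods/c03efb06-e120-4665-9773-3851bdcb9833/volumes/kubernetes.io~projected/kube-api-access-5jrtl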
Sep 12 05:50:15.653992 kubelet[2741]: E0912 05:50:15.653804 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:50:16.206097 sshd[4355]: Connection closed by 10.0.0.1 port 38424 Sep 12 05:50:16.206708 sshd-session[4352]: pam_unix(sshd:session): session closed for user core Sep 12 05:50:16.217993 systemd[1]: sshd@23-10.0.0.20:22-10.0.0.1:38424.service: Deactivated successfully. Sep 12 05:50:16.220157 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 05:50:16.220885 systemd-logind[1540]: Session 24 logged out. Waiting for processes to exit. Sep 12 05:50:16.224421 systemd[1]: Started sshd@24-10.0.0.20:22-10.0.0.1:38428.service - OpenSSH per-connection server daemon (10.0.0.1:38428). Sep 12 05:50:16.225157 systemd-logind[1540]: Removed session 24. Sep 12 05:50:16.234567 kubelet[2741]: I0912 05:50:16.234501 2741 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T05:50:16Z","lastTransitionTime":"2025-09-12T05:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 05:50:16.288273 sshd[4507]: Accepted publickey for core from 10.0.0.1 port 38428 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:50:16.289572 sshd-session[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:50:16.294215 systemd-logind[1540]: New session 25 of user core. Sep 12 05:50:16.305126 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 05:50:16.656017 kubelet[2741]: I0912 05:50:16.655964 2741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782d0b1e-f95d-4104-bd6f-40c00ecd3c54" path="/var/lib/kubelet/pods/782d0b1e-f95d-4104-bd6f-40c00ecd3c54/volumes" Sep 12 05:50:16.656865 kubelet[2741]: I0912 05:50:16.656830 2741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03efb06-e120-4665-9773-3851bdcb9833" path="/var/lib/kubelet/pods/c03efb06-e120-4665-9773-3851bdcb9833/volumes" Sep 12 05:50:16.808252 sshd[4510]: Connection closed by 10.0.0.1 port 38428 Sep 12 05:50:16.812305 sshd-session[4507]: pam_unix(sshd:session): session closed for user core Sep 12 05:50:16.822070 systemd[1]: sshd@24-10.0.0.20:22-10.0.0.1:38428.service: Deactivated successfully. Sep 12 05:50:16.825488 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 05:50:16.827970 systemd-logind[1540]: Session 25 logged out. Waiting for processes to exit. Sep 12 05:50:16.837271 systemd[1]: Started sshd@25-10.0.0.20:22-10.0.0.1:38440.service - OpenSSH per-connection server daemon (10.0.0.1:38440). Sep 12 05:50:16.841416 systemd-logind[1540]: Removed session 25. Sep 12 05:50:16.857838 systemd[1]: Created slice kubepods-burstable-pod7a7c6eef_360d_40fc_89bb_d9cc9de8e22d.slice - libcontainer container kubepods-burstable-pod7a7c6eef_360d_40fc_89bb_d9cc9de8e22d.slice. Sep 12 05:50:16.892177 sshd[4522]: Accepted publickey for core from 10.0.0.1 port 38440 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:50:16.893671 sshd-session[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:50:16.897878 systemd-logind[1540]: New session 26 of user core. 
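The setters.go entry above records the node flipping to NotReady, with the Ready condition embedded as plain JSON; the stated reason matches the earlier removal of /etc/cni/net.d/05-cilium.conf and the resulting "cni plugin not initialized" errors. The payload can be inspected directly:

    # The condition logged by setters.go above is ordinary JSON.
    import json

    condition = json.loads(
        '{"type":"Ready","status":"False",'
        '"lastHeartbeatTime":"2025-09-12T05:50:16Z",'
        '"lastTransitionTime":"2025-09-12T05:50:16Z",'
        '"reason":"KubeletNotReady",'
        '"message":"container runtime network not ready: NetworkReady=false '
        'reason:NetworkPluginNotReady message:Network plugin returns error: '
        'cni plugin not initialized"}'
    )
    print(condition["type"], condition["status"], "-", condition["reason"])
    # Ready False - KubeletNotReady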
Sep 12 05:50:16.909132 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 05:50:16.952665 kubelet[2741]: I0912 05:50:16.952592 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a7c6eef-360d-40fc-89bb-d9cc9de8e22d-bpf-maps\") pod \"cilium-c9zfv\" (UID: \"7a7c6eef-360d-40fc-89bb-d9cc9de8e22d\") " pod="kube-system/cilium-c9zfv" Sep 12 05:50:16.952665 kubelet[2741]: I0912 05:50:16.952639 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a7c6eef-360d-40fc-89bb-d9cc9de8e22d-cni-path\") pod \"cilium-c9zfv\" (UID: \"7a7c6eef-360d-40fc-89bb-d9cc9de8e22d\") " pod="kube-system/cilium-c9zfv" Sep 12 05:50:16.952665 kubelet[2741]: I0912 05:50:16.952666 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a7c6eef-360d-40fc-89bb-d9cc9de8e22d-cilium-config-path\") pod \"cilium-c9zfv\" (UID: \"7a7c6eef-360d-40fc-89bb-d9cc9de8e22d\") " pod="kube-system/cilium-c9zfv" Sep 12 05:50:16.952906 kubelet[2741]: I0912 05:50:16.952684 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a7c6eef-360d-40fc-89bb-d9cc9de8e22d-cilium-cgroup\") pod \"cilium-c9zfv\" (UID: \"7a7c6eef-360d-40fc-89bb-d9cc9de8e22d\") " pod="kube-system/cilium-c9zfv" Sep 12 05:50:16.952906 kubelet[2741]: I0912 05:50:16.952709 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a7c6eef-360d-40fc-89bb-d9cc9de8e22d-host-proc-sys-net\") pod \"cilium-c9zfv\" (UID: \"7a7c6eef-360d-40fc-89bb-d9cc9de8e22d\") " pod="kube-system/cilium-c9zfv" Sep 12 05:50:16.952906 kubelet[2741]: I0912 05:50:16.952726 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a7c6eef-360d-40fc-89bb-d9cc9de8e22d-hubble-tls\") pod \"cilium-c9zfv\" (UID: \"7a7c6eef-360d-40fc-89bb-d9cc9de8e22d\") " pod="kube-system/cilium-c9zfv" Sep 12 05:50:16.952906 kubelet[2741]: I0912 05:50:16.952744 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a7c6eef-360d-40fc-89bb-d9cc9de8e22d-hostproc\") pod \"cilium-c9zfv\" (UID: \"7a7c6eef-360d-40fc-89bb-d9cc9de8e22d\") " pod="kube-system/cilium-c9zfv" Sep 12 05:50:16.952906 kubelet[2741]: I0912 05:50:16.952760 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a7c6eef-360d-40fc-89bb-d9cc9de8e22d-etc-cni-netd\") pod \"cilium-c9zfv\" (UID: \"7a7c6eef-360d-40fc-89bb-d9cc9de8e22d\") " pod="kube-system/cilium-c9zfv" Sep 12 05:50:16.952906 kubelet[2741]: I0912 05:50:16.952777 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7a7c6eef-360d-40fc-89bb-d9cc9de8e22d-cilium-ipsec-secrets\") pod \"cilium-c9zfv\" (UID: \"7a7c6eef-360d-40fc-89bb-d9cc9de8e22d\") " pod="kube-system/cilium-c9zfv" Sep 12 05:50:16.953097 kubelet[2741]: I0912 05:50:16.952794 2741 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvjlh\" (UniqueName: \"kubernetes.io/projected/7a7c6eef-360d-40fc-89bb-d9cc9de8e22d-kube-api-access-hvjlh\") pod \"cilium-c9zfv\" (UID: \"7a7c6eef-360d-40fc-89bb-d9cc9de8e22d\") " pod="kube-system/cilium-c9zfv" Sep 12 05:50:16.953097 kubelet[2741]: I0912 05:50:16.952811 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a7c6eef-360d-40fc-89bb-d9cc9de8e22d-xtables-lock\") pod \"cilium-c9zfv\" (UID: \"7a7c6eef-360d-40fc-89bb-d9cc9de8e22d\") " pod="kube-system/cilium-c9zfv" Sep 12 05:50:16.953097 kubelet[2741]: I0912 05:50:16.952828 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a7c6eef-360d-40fc-89bb-d9cc9de8e22d-lib-modules\") pod \"cilium-c9zfv\" (UID: \"7a7c6eef-360d-40fc-89bb-d9cc9de8e22d\") " pod="kube-system/cilium-c9zfv" Sep 12 05:50:16.953097 kubelet[2741]: I0912 05:50:16.952845 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a7c6eef-360d-40fc-89bb-d9cc9de8e22d-host-proc-sys-kernel\") pod \"cilium-c9zfv\" (UID: \"7a7c6eef-360d-40fc-89bb-d9cc9de8e22d\") " pod="kube-system/cilium-c9zfv" Sep 12 05:50:16.953097 kubelet[2741]: I0912 05:50:16.952870 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a7c6eef-360d-40fc-89bb-d9cc9de8e22d-cilium-run\") pod \"cilium-c9zfv\" (UID: \"7a7c6eef-360d-40fc-89bb-d9cc9de8e22d\") " pod="kube-system/cilium-c9zfv" Sep 12 05:50:16.953097 kubelet[2741]: I0912 05:50:16.952887 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a7c6eef-360d-40fc-89bb-d9cc9de8e22d-clustermesh-secrets\") pod \"cilium-c9zfv\" (UID: \"7a7c6eef-360d-40fc-89bb-d9cc9de8e22d\") " pod="kube-system/cilium-c9zfv" Sep 12 05:50:16.963506 sshd[4525]: Connection closed by 10.0.0.1 port 38440 Sep 12 05:50:16.963903 sshd-session[4522]: pam_unix(sshd:session): session closed for user core Sep 12 05:50:16.977158 systemd[1]: sshd@25-10.0.0.20:22-10.0.0.1:38440.service: Deactivated successfully. Sep 12 05:50:16.979548 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 05:50:16.980593 systemd-logind[1540]: Session 26 logged out. Waiting for processes to exit. Sep 12 05:50:16.984396 systemd[1]: Started sshd@26-10.0.0.20:22-10.0.0.1:38450.service - OpenSSH per-connection server daemon (10.0.0.1:38450). Sep 12 05:50:16.985138 systemd-logind[1540]: Removed session 26. Sep 12 05:50:17.043621 sshd[4532]: Accepted publickey for core from 10.0.0.1 port 38450 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:50:17.044946 sshd-session[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:50:17.049940 systemd-logind[1540]: New session 27 of user core. Sep 12 05:50:17.060254 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 12 05:50:17.164559 kubelet[2741]: E0912 05:50:17.162409 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:50:17.164717 containerd[1558]: time="2025-09-12T05:50:17.164141223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c9zfv,Uid:7a7c6eef-360d-40fc-89bb-d9cc9de8e22d,Namespace:kube-system,Attempt:0,}" Sep 12 05:50:17.189932 containerd[1558]: time="2025-09-12T05:50:17.189676309Z" level=info msg="connecting to shim 9e7bbe9b013ceb13c02c2837f477c27293b6c27fae06ec9f42d136bfd4617305" address="unix:///run/containerd/s/bfcd79feaa6ea38e2c01b2dbae7e921c7153e004eb2d1fa227e0d18060513bf0" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:50:17.221183 systemd[1]: Started cri-containerd-9e7bbe9b013ceb13c02c2837f477c27293b6c27fae06ec9f42d136bfd4617305.scope - libcontainer container 9e7bbe9b013ceb13c02c2837f477c27293b6c27fae06ec9f42d136bfd4617305. Sep 12 05:50:17.246216 containerd[1558]: time="2025-09-12T05:50:17.246123041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c9zfv,Uid:7a7c6eef-360d-40fc-89bb-d9cc9de8e22d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e7bbe9b013ceb13c02c2837f477c27293b6c27fae06ec9f42d136bfd4617305\"" Sep 12 05:50:17.247084 kubelet[2741]: E0912 05:50:17.247053 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:50:17.253322 containerd[1558]: time="2025-09-12T05:50:17.253270978Z" level=info msg="CreateContainer within sandbox \"9e7bbe9b013ceb13c02c2837f477c27293b6c27fae06ec9f42d136bfd4617305\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 05:50:17.260416 containerd[1558]: time="2025-09-12T05:50:17.260375012Z" level=info msg="Container a292cff154f59719f2491df11aead5ec0dd20785852cd0cee7dc7baf73d6566b: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:50:17.270115 containerd[1558]: time="2025-09-12T05:50:17.270069239Z" level=info msg="CreateContainer within sandbox \"9e7bbe9b013ceb13c02c2837f477c27293b6c27fae06ec9f42d136bfd4617305\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a292cff154f59719f2491df11aead5ec0dd20785852cd0cee7dc7baf73d6566b\"" Sep 12 05:50:17.270591 containerd[1558]: time="2025-09-12T05:50:17.270560789Z" level=info msg="StartContainer for \"a292cff154f59719f2491df11aead5ec0dd20785852cd0cee7dc7baf73d6566b\"" Sep 12 05:50:17.271375 containerd[1558]: time="2025-09-12T05:50:17.271350058Z" level=info msg="connecting to shim a292cff154f59719f2491df11aead5ec0dd20785852cd0cee7dc7baf73d6566b" address="unix:///run/containerd/s/bfcd79feaa6ea38e2c01b2dbae7e921c7153e004eb2d1fa227e0d18060513bf0" protocol=ttrpc version=3 Sep 12 05:50:17.293206 systemd[1]: Started cri-containerd-a292cff154f59719f2491df11aead5ec0dd20785852cd0cee7dc7baf73d6566b.scope - libcontainer container a292cff154f59719f2491df11aead5ec0dd20785852cd0cee7dc7baf73d6566b. Sep 12 05:50:17.326801 containerd[1558]: time="2025-09-12T05:50:17.326757583Z" level=info msg="StartContainer for \"a292cff154f59719f2491df11aead5ec0dd20785852cd0cee7dc7baf73d6566b\" returns successfully" Sep 12 05:50:17.336415 systemd[1]: cri-containerd-a292cff154f59719f2491df11aead5ec0dd20785852cd0cee7dc7baf73d6566b.scope: Deactivated successfully. 
Sep 12 05:50:17.338035 containerd[1558]: time="2025-09-12T05:50:17.337960704Z" level=info msg="received exit event container_id:\"a292cff154f59719f2491df11aead5ec0dd20785852cd0cee7dc7baf73d6566b\" id:\"a292cff154f59719f2491df11aead5ec0dd20785852cd0cee7dc7baf73d6566b\" pid:4605 exited_at:{seconds:1757656217 nanos:337717960}" Sep 12 05:50:17.338224 containerd[1558]: time="2025-09-12T05:50:17.338106272Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a292cff154f59719f2491df11aead5ec0dd20785852cd0cee7dc7baf73d6566b\" id:\"a292cff154f59719f2491df11aead5ec0dd20785852cd0cee7dc7baf73d6566b\" pid:4605 exited_at:{seconds:1757656217 nanos:337717960}" Sep 12 05:50:17.654281 kubelet[2741]: E0912 05:50:17.654234 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:50:17.921348 kubelet[2741]: E0912 05:50:17.921053 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:50:17.928368 containerd[1558]: time="2025-09-12T05:50:17.928322414Z" level=info msg="CreateContainer within sandbox \"9e7bbe9b013ceb13c02c2837f477c27293b6c27fae06ec9f42d136bfd4617305\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 05:50:17.935468 containerd[1558]: time="2025-09-12T05:50:17.935411749Z" level=info msg="Container 0eff56aaccaa3c66ae74e25f45c2d85272537392551a51d71a2c69404e46263b: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:50:17.948095 containerd[1558]: time="2025-09-12T05:50:17.947716527Z" level=info msg="CreateContainer within sandbox \"9e7bbe9b013ceb13c02c2837f477c27293b6c27fae06ec9f42d136bfd4617305\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0eff56aaccaa3c66ae74e25f45c2d85272537392551a51d71a2c69404e46263b\"" Sep 12 05:50:17.952378 containerd[1558]: time="2025-09-12T05:50:17.952319029Z" level=info msg="StartContainer for \"0eff56aaccaa3c66ae74e25f45c2d85272537392551a51d71a2c69404e46263b\"" Sep 12 05:50:17.953450 containerd[1558]: time="2025-09-12T05:50:17.953421416Z" level=info msg="connecting to shim 0eff56aaccaa3c66ae74e25f45c2d85272537392551a51d71a2c69404e46263b" address="unix:///run/containerd/s/bfcd79feaa6ea38e2c01b2dbae7e921c7153e004eb2d1fa227e0d18060513bf0" protocol=ttrpc version=3 Sep 12 05:50:17.976221 systemd[1]: Started cri-containerd-0eff56aaccaa3c66ae74e25f45c2d85272537392551a51d71a2c69404e46263b.scope - libcontainer container 0eff56aaccaa3c66ae74e25f45c2d85272537392551a51d71a2c69404e46263b. Sep 12 05:50:18.007871 containerd[1558]: time="2025-09-12T05:50:18.007819575Z" level=info msg="StartContainer for \"0eff56aaccaa3c66ae74e25f45c2d85272537392551a51d71a2c69404e46263b\" returns successfully" Sep 12 05:50:18.015279 systemd[1]: cri-containerd-0eff56aaccaa3c66ae74e25f45c2d85272537392551a51d71a2c69404e46263b.scope: Deactivated successfully. 
Sep 12 05:50:18.015750 containerd[1558]: time="2025-09-12T05:50:18.015708783Z" level=info msg="received exit event container_id:\"0eff56aaccaa3c66ae74e25f45c2d85272537392551a51d71a2c69404e46263b\" id:\"0eff56aaccaa3c66ae74e25f45c2d85272537392551a51d71a2c69404e46263b\" pid:4651 exited_at:{seconds:1757656218 nanos:15481217}" Sep 12 05:50:18.015858 containerd[1558]: time="2025-09-12T05:50:18.015819184Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0eff56aaccaa3c66ae74e25f45c2d85272537392551a51d71a2c69404e46263b\" id:\"0eff56aaccaa3c66ae74e25f45c2d85272537392551a51d71a2c69404e46263b\" pid:4651 exited_at:{seconds:1757656218 nanos:15481217}" Sep 12 05:50:18.925794 kubelet[2741]: E0912 05:50:18.925715 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:50:18.932329 containerd[1558]: time="2025-09-12T05:50:18.932255249Z" level=info msg="CreateContainer within sandbox \"9e7bbe9b013ceb13c02c2837f477c27293b6c27fae06ec9f42d136bfd4617305\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 05:50:18.949855 containerd[1558]: time="2025-09-12T05:50:18.949770914Z" level=info msg="Container 8d1606bb986dbac48834fbbada909287bb7adb0ba899c7a63e09e5dc8c90fc02: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:50:18.955089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1223413211.mount: Deactivated successfully. Sep 12 05:50:18.959939 containerd[1558]: time="2025-09-12T05:50:18.959882799Z" level=info msg="CreateContainer within sandbox \"9e7bbe9b013ceb13c02c2837f477c27293b6c27fae06ec9f42d136bfd4617305\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8d1606bb986dbac48834fbbada909287bb7adb0ba899c7a63e09e5dc8c90fc02\"" Sep 12 05:50:18.962042 containerd[1558]: time="2025-09-12T05:50:18.960559744Z" level=info msg="StartContainer for \"8d1606bb986dbac48834fbbada909287bb7adb0ba899c7a63e09e5dc8c90fc02\"" Sep 12 05:50:18.962446 containerd[1558]: time="2025-09-12T05:50:18.962410841Z" level=info msg="connecting to shim 8d1606bb986dbac48834fbbada909287bb7adb0ba899c7a63e09e5dc8c90fc02" address="unix:///run/containerd/s/bfcd79feaa6ea38e2c01b2dbae7e921c7153e004eb2d1fa227e0d18060513bf0" protocol=ttrpc version=3 Sep 12 05:50:18.991338 systemd[1]: Started cri-containerd-8d1606bb986dbac48834fbbada909287bb7adb0ba899c7a63e09e5dc8c90fc02.scope - libcontainer container 8d1606bb986dbac48834fbbada909287bb7adb0ba899c7a63e09e5dc8c90fc02. Sep 12 05:50:19.045171 containerd[1558]: time="2025-09-12T05:50:19.045131094Z" level=info msg="StartContainer for \"8d1606bb986dbac48834fbbada909287bb7adb0ba899c7a63e09e5dc8c90fc02\" returns successfully" Sep 12 05:50:19.045779 systemd[1]: cri-containerd-8d1606bb986dbac48834fbbada909287bb7adb0ba899c7a63e09e5dc8c90fc02.scope: Deactivated successfully. 
Sep 12 05:50:19.046757 containerd[1558]: time="2025-09-12T05:50:19.046642511Z" level=info msg="received exit event container_id:\"8d1606bb986dbac48834fbbada909287bb7adb0ba899c7a63e09e5dc8c90fc02\" id:\"8d1606bb986dbac48834fbbada909287bb7adb0ba899c7a63e09e5dc8c90fc02\" pid:4694 exited_at:{seconds:1757656219 nanos:46466244}" Sep 12 05:50:19.047051 containerd[1558]: time="2025-09-12T05:50:19.047021695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d1606bb986dbac48834fbbada909287bb7adb0ba899c7a63e09e5dc8c90fc02\" id:\"8d1606bb986dbac48834fbbada909287bb7adb0ba899c7a63e09e5dc8c90fc02\" pid:4694 exited_at:{seconds:1757656219 nanos:46466244}" Sep 12 05:50:19.079950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d1606bb986dbac48834fbbada909287bb7adb0ba899c7a63e09e5dc8c90fc02-rootfs.mount: Deactivated successfully. Sep 12 05:50:19.711630 kubelet[2741]: E0912 05:50:19.711566 2741 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 05:50:19.930444 kubelet[2741]: E0912 05:50:19.930389 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:50:19.937584 containerd[1558]: time="2025-09-12T05:50:19.937513085Z" level=info msg="CreateContainer within sandbox \"9e7bbe9b013ceb13c02c2837f477c27293b6c27fae06ec9f42d136bfd4617305\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 05:50:19.947031 containerd[1558]: time="2025-09-12T05:50:19.946914895Z" level=info msg="Container 0acb5f05e08064e1c49bd6bfbf3d2d47769a10edf6b42f1fa0e4fac61886a8eb: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:50:19.956330 containerd[1558]: time="2025-09-12T05:50:19.956270838Z" level=info msg="CreateContainer within sandbox \"9e7bbe9b013ceb13c02c2837f477c27293b6c27fae06ec9f42d136bfd4617305\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0acb5f05e08064e1c49bd6bfbf3d2d47769a10edf6b42f1fa0e4fac61886a8eb\"" Sep 12 05:50:19.956817 containerd[1558]: time="2025-09-12T05:50:19.956787355Z" level=info msg="StartContainer for \"0acb5f05e08064e1c49bd6bfbf3d2d47769a10edf6b42f1fa0e4fac61886a8eb\"" Sep 12 05:50:19.957589 containerd[1558]: time="2025-09-12T05:50:19.957566593Z" level=info msg="connecting to shim 0acb5f05e08064e1c49bd6bfbf3d2d47769a10edf6b42f1fa0e4fac61886a8eb" address="unix:///run/containerd/s/bfcd79feaa6ea38e2c01b2dbae7e921c7153e004eb2d1fa227e0d18060513bf0" protocol=ttrpc version=3 Sep 12 05:50:19.981187 systemd[1]: Started cri-containerd-0acb5f05e08064e1c49bd6bfbf3d2d47769a10edf6b42f1fa0e4fac61886a8eb.scope - libcontainer container 0acb5f05e08064e1c49bd6bfbf3d2d47769a10edf6b42f1fa0e4fac61886a8eb. Sep 12 05:50:20.011976 systemd[1]: cri-containerd-0acb5f05e08064e1c49bd6bfbf3d2d47769a10edf6b42f1fa0e4fac61886a8eb.scope: Deactivated successfully. 
Sep 12 05:50:20.021056 containerd[1558]: time="2025-09-12T05:50:20.012476727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0acb5f05e08064e1c49bd6bfbf3d2d47769a10edf6b42f1fa0e4fac61886a8eb\" id:\"0acb5f05e08064e1c49bd6bfbf3d2d47769a10edf6b42f1fa0e4fac61886a8eb\" pid:4735 exited_at:{seconds:1757656220 nanos:12250745}" Sep 12 05:50:20.064321 containerd[1558]: time="2025-09-12T05:50:20.064262155Z" level=info msg="received exit event container_id:\"0acb5f05e08064e1c49bd6bfbf3d2d47769a10edf6b42f1fa0e4fac61886a8eb\" id:\"0acb5f05e08064e1c49bd6bfbf3d2d47769a10edf6b42f1fa0e4fac61886a8eb\" pid:4735 exited_at:{seconds:1757656220 nanos:12250745}" Sep 12 05:50:20.072615 containerd[1558]: time="2025-09-12T05:50:20.072569758Z" level=info msg="StartContainer for \"0acb5f05e08064e1c49bd6bfbf3d2d47769a10edf6b42f1fa0e4fac61886a8eb\" returns successfully" Sep 12 05:50:20.087613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0acb5f05e08064e1c49bd6bfbf3d2d47769a10edf6b42f1fa0e4fac61886a8eb-rootfs.mount: Deactivated successfully. Sep 12 05:50:20.654128 kubelet[2741]: E0912 05:50:20.654081 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:50:20.934740 kubelet[2741]: E0912 05:50:20.934618 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:50:20.973558 containerd[1558]: time="2025-09-12T05:50:20.973501486Z" level=info msg="CreateContainer within sandbox \"9e7bbe9b013ceb13c02c2837f477c27293b6c27fae06ec9f42d136bfd4617305\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 05:50:20.997355 containerd[1558]: time="2025-09-12T05:50:20.997301199Z" level=info msg="Container 075da90ad5d0dcc7cade97d9171ce5007c35855f7497be9e3c3b3601839d3697: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:50:21.005256 containerd[1558]: time="2025-09-12T05:50:21.005222129Z" level=info msg="CreateContainer within sandbox \"9e7bbe9b013ceb13c02c2837f477c27293b6c27fae06ec9f42d136bfd4617305\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"075da90ad5d0dcc7cade97d9171ce5007c35855f7497be9e3c3b3601839d3697\"" Sep 12 05:50:21.006071 containerd[1558]: time="2025-09-12T05:50:21.005717255Z" level=info msg="StartContainer for \"075da90ad5d0dcc7cade97d9171ce5007c35855f7497be9e3c3b3601839d3697\"" Sep 12 05:50:21.006646 containerd[1558]: time="2025-09-12T05:50:21.006620288Z" level=info msg="connecting to shim 075da90ad5d0dcc7cade97d9171ce5007c35855f7497be9e3c3b3601839d3697" address="unix:///run/containerd/s/bfcd79feaa6ea38e2c01b2dbae7e921c7153e004eb2d1fa227e0d18060513bf0" protocol=ttrpc version=3 Sep 12 05:50:21.031143 systemd[1]: Started cri-containerd-075da90ad5d0dcc7cade97d9171ce5007c35855f7497be9e3c3b3601839d3697.scope - libcontainer container 075da90ad5d0dcc7cade97d9171ce5007c35855f7497be9e3c3b3601839d3697. 
Sep 12 05:50:21.066906 containerd[1558]: time="2025-09-12T05:50:21.066855981Z" level=info msg="StartContainer for \"075da90ad5d0dcc7cade97d9171ce5007c35855f7497be9e3c3b3601839d3697\" returns successfully" Sep 12 05:50:21.131449 containerd[1558]: time="2025-09-12T05:50:21.131403583Z" level=info msg="TaskExit event in podsandbox handler container_id:\"075da90ad5d0dcc7cade97d9171ce5007c35855f7497be9e3c3b3601839d3697\" id:\"b1017b4eb74078d83d5132bfdf0b70807966aa56bb5625b69e26090ffabf7bdb\" pid:4802 exited_at:{seconds:1757656221 nanos:131119091}" Sep 12 05:50:21.496044 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 12 05:50:21.941337 kubelet[2741]: E0912 05:50:21.941291 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:50:21.955199 kubelet[2741]: I0912 05:50:21.955120 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c9zfv" podStartSLOduration=5.955090264 podStartE2EDuration="5.955090264s" podCreationTimestamp="2025-09-12 05:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:50:21.954782336 +0000 UTC m=+87.411236234" watchObservedRunningTime="2025-09-12 05:50:21.955090264 +0000 UTC m=+87.411544162" Sep 12 05:50:23.164248 kubelet[2741]: E0912 05:50:23.164188 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:50:23.405335 containerd[1558]: time="2025-09-12T05:50:23.405272135Z" level=info msg="TaskExit event in podsandbox handler container_id:\"075da90ad5d0dcc7cade97d9171ce5007c35855f7497be9e3c3b3601839d3697\" id:\"f8483bd3364daf45b429408bcc20bdc239a337f0f075abed46eb622ec1be9d0c\" pid:4962 exit_status:1 exited_at:{seconds:1757656223 nanos:404725142}" Sep 12 05:50:24.695988 systemd-networkd[1487]: lxc_health: Link UP Sep 12 05:50:24.696788 systemd-networkd[1487]: lxc_health: Gained carrier Sep 12 05:50:25.164362 kubelet[2741]: E0912 05:50:25.164244 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:50:25.520433 containerd[1558]: time="2025-09-12T05:50:25.519987885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"075da90ad5d0dcc7cade97d9171ce5007c35855f7497be9e3c3b3601839d3697\" id:\"19a498027a80740465aa6a28f75c51da744d2a839c39bbde1b413c316e18e12c\" pid:5334 exited_at:{seconds:1757656225 nanos:519509393}" Sep 12 05:50:25.949030 kubelet[2741]: E0912 05:50:25.948596 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:50:26.568327 systemd-networkd[1487]: lxc_health: Gained IPv6LL Sep 12 05:50:26.950838 kubelet[2741]: E0912 05:50:26.950667 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:50:27.627505 containerd[1558]: time="2025-09-12T05:50:27.627454006Z" level=info msg="TaskExit event in podsandbox handler container_id:\"075da90ad5d0dcc7cade97d9171ce5007c35855f7497be9e3c3b3601839d3697\" 
id:\"2e8cc308226f04d511db03f3cd53d6df3ef7aa25d8cc8199355ff92214b86710\" pid:5369 exited_at:{seconds:1757656227 nanos:627160337}" Sep 12 05:50:29.721589 containerd[1558]: time="2025-09-12T05:50:29.721524417Z" level=info msg="TaskExit event in podsandbox handler container_id:\"075da90ad5d0dcc7cade97d9171ce5007c35855f7497be9e3c3b3601839d3697\" id:\"2331c68263c6758b857395adf185aee382183c40a7da436211cf58945c3df479\" pid:5399 exited_at:{seconds:1757656229 nanos:721250546}" Sep 12 05:50:29.727699 sshd[4540]: Connection closed by 10.0.0.1 port 38450 Sep 12 05:50:29.728363 sshd-session[4532]: pam_unix(sshd:session): session closed for user core Sep 12 05:50:29.732913 systemd[1]: sshd@26-10.0.0.20:22-10.0.0.1:38450.service: Deactivated successfully. Sep 12 05:50:29.735052 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 05:50:29.735754 systemd-logind[1540]: Session 27 logged out. Waiting for processes to exit. Sep 12 05:50:29.737143 systemd-logind[1540]: Removed session 27.