Sep 16 04:57:52.849226 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 16 03:05:42 -00 2025
Sep 16 04:57:52.849254 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:57:52.849265 kernel: BIOS-provided physical RAM map:
Sep 16 04:57:52.849272 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 16 04:57:52.849278 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 16 04:57:52.849285 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 16 04:57:52.849293 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 16 04:57:52.849299 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 16 04:57:52.849308 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 16 04:57:52.849317 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 16 04:57:52.849324 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 16 04:57:52.849331 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 16 04:57:52.849337 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 16 04:57:52.849344 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 16 04:57:52.849352 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 16 04:57:52.849362 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 16 04:57:52.849371 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 16 04:57:52.849378 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 16 04:57:52.849385 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 16 04:57:52.849392 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 16 04:57:52.849399 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 16 04:57:52.849406 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 16 04:57:52.849413 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 16 04:57:52.849420 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 16 04:57:52.849427 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 16 04:57:52.849436 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 16 04:57:52.849443 kernel: NX (Execute Disable) protection: active
Sep 16 04:57:52.849450 kernel: APIC: Static calls initialized
Sep 16 04:57:52.849457 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 16 04:57:52.849465 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 16 04:57:52.849471 kernel: extended physical RAM map:
Sep 16 04:57:52.849479 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 16 04:57:52.849486 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 16 04:57:52.849493 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 16 04:57:52.849500 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 16 04:57:52.849507 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 16 04:57:52.849516 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 16 04:57:52.849523 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 16 04:57:52.849530 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 16 04:57:52.849537 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 16 04:57:52.849548 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 16 04:57:52.849556 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 16 04:57:52.849574 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 16 04:57:52.849582 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 16 04:57:52.849589 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 16 04:57:52.849596 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 16 04:57:52.849604 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 16 04:57:52.849611 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 16 04:57:52.849618 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 16 04:57:52.849628 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 16 04:57:52.849636 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 16 04:57:52.849645 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 16 04:57:52.849656 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 16 04:57:52.849663 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 16 04:57:52.849670 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 16 04:57:52.849677 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 16 04:57:52.849685 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 16 04:57:52.849692 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 16 04:57:52.849702 kernel: efi: EFI v2.7 by EDK II
Sep 16 04:57:52.849710 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 16 04:57:52.849717 kernel: random: crng init done
Sep 16 04:57:52.849727 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 16 04:57:52.849734 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 16 04:57:52.849746 kernel: secureboot: Secure boot disabled
Sep 16 04:57:52.849754 kernel: SMBIOS 2.8 present.
Sep 16 04:57:52.849761 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 16 04:57:52.849769 kernel: DMI: Memory slots populated: 1/1
Sep 16 04:57:52.849776 kernel: Hypervisor detected: KVM
Sep 16 04:57:52.849783 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 16 04:57:52.849791 kernel: kvm-clock: using sched offset of 5106920706 cycles
Sep 16 04:57:52.849799 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 16 04:57:52.849806 kernel: tsc: Detected 2794.750 MHz processor
Sep 16 04:57:52.849814 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 16 04:57:52.849822 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 16 04:57:52.849832 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 16 04:57:52.849840 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 16 04:57:52.849847 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 16 04:57:52.849855 kernel: Using GB pages for direct mapping
Sep 16 04:57:52.849862 kernel: ACPI: Early table checksum verification disabled
Sep 16 04:57:52.849870 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 16 04:57:52.849877 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 16 04:57:52.849885 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:57:52.849893 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:57:52.849902 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 16 04:57:52.849910 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:57:52.849917 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:57:52.849925 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:57:52.849932 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:57:52.849940 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 16 04:57:52.849947 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 16 04:57:52.849955 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 16 04:57:52.849964 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 16 04:57:52.849972 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 16 04:57:52.849979 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 16 04:57:52.849987 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 16 04:57:52.849994 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 16 04:57:52.850001 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 16 04:57:52.850009 kernel: No NUMA configuration found
Sep 16 04:57:52.850016 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 16 04:57:52.850048 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 16 04:57:52.850057 kernel: Zone ranges:
Sep 16 04:57:52.850067 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Sep 16 04:57:52.850075 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cedbfff]
Sep 16 04:57:52.850082 kernel:   Normal   empty
Sep 16 04:57:52.850089 kernel:   Device   empty
Sep 16 04:57:52.850097 kernel: Movable zone start for each node
Sep 16 04:57:52.850104 kernel: Early memory node ranges
Sep 16 04:57:52.850112 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 16 04:57:52.850119 kernel:   node   0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 16 04:57:52.850129 kernel:   node   0: [mem 0x0000000000808000-0x000000000080afff]
Sep 16 04:57:52.850139 kernel:   node   0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 16 04:57:52.850146 kernel:   node   0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 16 04:57:52.850154 kernel:   node   0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 16 04:57:52.850161 kernel:   node   0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 16 04:57:52.850169 kernel:   node   0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 16 04:57:52.850176 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 16 04:57:52.850184 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 16 04:57:52.850194 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 16 04:57:52.850211 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 16 04:57:52.850219 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 16 04:57:52.850226 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 16 04:57:52.850234 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 16 04:57:52.850245 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 16 04:57:52.850252 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 16 04:57:52.850260 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 16 04:57:52.850268 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 16 04:57:52.850276 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 16 04:57:52.850286 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 16 04:57:52.850294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 16 04:57:52.850302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 16 04:57:52.850309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 16 04:57:52.850317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 16 04:57:52.850325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 16 04:57:52.850333 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 16 04:57:52.850341 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 16 04:57:52.850348 kernel: TSC deadline timer available
Sep 16 04:57:52.850358 kernel: CPU topo: Max. logical packages:   1
Sep 16 04:57:52.850366 kernel: CPU topo: Max. logical dies:       1
Sep 16 04:57:52.850374 kernel: CPU topo: Max. dies per package:   1
Sep 16 04:57:52.850381 kernel: CPU topo: Max. threads per core:   1
Sep 16 04:57:52.850402 kernel: CPU topo: Num. cores per package:  4
Sep 16 04:57:52.850410 kernel: CPU topo: Num. threads per package: 4
Sep 16 04:57:52.850418 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 16 04:57:52.850425 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 16 04:57:52.850433 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 16 04:57:52.850441 kernel: kvm-guest: setup PV sched yield
Sep 16 04:57:52.850452 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 16 04:57:52.850460 kernel: Booting paravirtualized kernel on KVM
Sep 16 04:57:52.850468 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 16 04:57:52.850476 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 16 04:57:52.850484 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 16 04:57:52.850491 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 16 04:57:52.850499 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 16 04:57:52.850507 kernel: kvm-guest: PV spinlocks enabled
Sep 16 04:57:52.850517 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 16 04:57:52.850526 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:57:52.850537 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 16 04:57:52.850545 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 16 04:57:52.850553 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 16 04:57:52.850568 kernel: Fallback order for Node 0: 0
Sep 16 04:57:52.850576 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 641450
Sep 16 04:57:52.850584 kernel: Policy zone: DMA32
Sep 16 04:57:52.850592 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 16 04:57:52.850602 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 16 04:57:52.850610 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 16 04:57:52.850617 kernel: ftrace: allocated 157 pages with 5 groups
Sep 16 04:57:52.850625 kernel: Dynamic Preempt: voluntary
Sep 16 04:57:52.850633 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 16 04:57:52.850641 kernel: rcu:     RCU event tracing is enabled.
Sep 16 04:57:52.850649 kernel: rcu:     RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 16 04:57:52.850657 kernel:         Trampoline variant of Tasks RCU enabled.
Sep 16 04:57:52.850665 kernel:         Rude variant of Tasks RCU enabled.
Sep 16 04:57:52.850676 kernel:         Tracing variant of Tasks RCU enabled.
Sep 16 04:57:52.850685 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 16 04:57:52.850696 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 16 04:57:52.850704 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 16 04:57:52.850712 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 16 04:57:52.850720 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 16 04:57:52.850728 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 16 04:57:52.850736 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 16 04:57:52.850743 kernel: Console: colour dummy device 80x25
Sep 16 04:57:52.850754 kernel: printk: legacy console [ttyS0] enabled
Sep 16 04:57:52.850761 kernel: ACPI: Core revision 20240827
Sep 16 04:57:52.850769 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 16 04:57:52.850777 kernel: APIC: Switch to symmetric I/O mode setup
Sep 16 04:57:52.850785 kernel: x2apic enabled
Sep 16 04:57:52.850792 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 16 04:57:52.850800 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 16 04:57:52.850808 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 16 04:57:52.850816 kernel: kvm-guest: setup PV IPIs
Sep 16 04:57:52.850826 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 16 04:57:52.850834 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 16 04:57:52.850842 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 16 04:57:52.850850 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 16 04:57:52.850857 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 16 04:57:52.850865 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 16 04:57:52.850873 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 16 04:57:52.850881 kernel: Spectre V2 : Mitigation: Retpolines
Sep 16 04:57:52.850891 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on context switch and VMEXIT
Sep 16 04:57:52.850898 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 16 04:57:52.850906 kernel: active return thunk: retbleed_return_thunk
Sep 16 04:57:52.850914 kernel: RETBleed: Mitigation: untrained return thunk
Sep 16 04:57:52.850924 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 16 04:57:52.850932 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 16 04:57:52.850940 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 16 04:57:52.850948 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 16 04:57:52.850956 kernel: active return thunk: srso_return_thunk
Sep 16 04:57:52.850966 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 16 04:57:52.850974 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 16 04:57:52.850982 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 16 04:57:52.850989 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 16 04:57:52.850997 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Sep 16 04:57:52.851005 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 16 04:57:52.851012 kernel: Freeing SMP alternatives memory: 32K
Sep 16 04:57:52.851033 kernel: pid_max: default: 32768 minimum: 301
Sep 16 04:57:52.851052 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 16 04:57:52.851064 kernel: landlock: Up and running.
Sep 16 04:57:52.851071 kernel: SELinux:  Initializing.
Sep 16 04:57:52.851079 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 16 04:57:52.851087 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 16 04:57:52.851095 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 16 04:57:52.851102 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 16 04:57:52.851110 kernel: ... version:                0
Sep 16 04:57:52.851118 kernel: ... bit width:              48
Sep 16 04:57:52.851126 kernel: ... generic registers:      6
Sep 16 04:57:52.851136 kernel: ... value mask:             0000ffffffffffff
Sep 16 04:57:52.851143 kernel: ... max period:             00007fffffffffff
Sep 16 04:57:52.851151 kernel: ... fixed-purpose events:   0
Sep 16 04:57:52.851159 kernel: ... event mask:             000000000000003f
Sep 16 04:57:52.851166 kernel: signal: max sigframe size: 1776
Sep 16 04:57:52.851174 kernel: rcu: Hierarchical SRCU implementation.
Sep 16 04:57:52.851182 kernel: rcu:     Max phase no-delay instances is 400.
Sep 16 04:57:52.851193 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 16 04:57:52.851201 kernel: smp: Bringing up secondary CPUs ...
Sep 16 04:57:52.851211 kernel: smpboot: x86: Booting SMP configuration:
Sep 16 04:57:52.851218 kernel: .... node  #0, CPUs:        #1  #2  #3
Sep 16 04:57:52.851226 kernel: smp: Brought up 1 node, 4 CPUs
Sep 16 04:57:52.851234 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 16 04:57:52.851242 kernel: Memory: 2422676K/2565800K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54096K init, 2868K bss, 137196K reserved, 0K cma-reserved)
Sep 16 04:57:52.851250 kernel: devtmpfs: initialized
Sep 16 04:57:52.851257 kernel: x86/mm: Memory block size: 128MB
Sep 16 04:57:52.851277 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 16 04:57:52.851285 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 16 04:57:52.851296 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 16 04:57:52.851304 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 16 04:57:52.851312 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 16 04:57:52.851319 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 16 04:57:52.851345 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 16 04:57:52.851362 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 16 04:57:52.851380 kernel: pinctrl core: initialized pinctrl subsystem
Sep 16 04:57:52.851389 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 16 04:57:52.851397 kernel: audit: initializing netlink subsys (disabled)
Sep 16 04:57:52.851408 kernel: audit: type=2000 audit(1757998670.011:1): state=initialized audit_enabled=0 res=1
Sep 16 04:57:52.851416 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 16 04:57:52.851423 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 16 04:57:52.851431 kernel: cpuidle: using governor menu
Sep 16 04:57:52.851439 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 16 04:57:52.851447 kernel: dca service started, version 1.12.1
Sep 16 04:57:52.851454 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 16 04:57:52.851462 kernel: PCI: Using configuration type 1 for base access
Sep 16 04:57:52.851470 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 16 04:57:52.851480 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 16 04:57:52.851488 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 16 04:57:52.851495 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 16 04:57:52.851503 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 16 04:57:52.851511 kernel: ACPI: Added _OSI(Module Device)
Sep 16 04:57:52.851518 kernel: ACPI: Added _OSI(Processor Device)
Sep 16 04:57:52.851526 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 16 04:57:52.851534 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 16 04:57:52.851544 kernel: ACPI: Interpreter enabled
Sep 16 04:57:52.851551 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 16 04:57:52.851559 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 16 04:57:52.851573 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 16 04:57:52.851581 kernel: PCI: Using E820 reservations for host bridge windows
Sep 16 04:57:52.851599 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 16 04:57:52.851607 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 16 04:57:52.851839 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 16 04:57:52.851974 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 16 04:57:52.852121 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 16 04:57:52.852134 kernel: PCI host bridge to bus 0000:00
Sep 16 04:57:52.852273 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Sep 16 04:57:52.852386 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Sep 16 04:57:52.852515 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 16 04:57:52.852640 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 16 04:57:52.852766 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 16 04:57:52.852877 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 16 04:57:52.852988 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 16 04:57:52.853161 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 16 04:57:52.853310 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 16 04:57:52.853434 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 16 04:57:52.853555 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 16 04:57:52.853747 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 16 04:57:52.853939 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 16 04:57:52.854180 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 16 04:57:52.854430 kernel: pci 0000:00:02.0: BAR 0 [io  0x6100-0x611f]
Sep 16 04:57:52.854587 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 16 04:57:52.854731 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 16 04:57:52.854923 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 16 04:57:52.855100 kernel: pci 0000:00:03.0: BAR 0 [io  0x6000-0x607f]
Sep 16 04:57:52.855289 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 16 04:57:52.855448 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 16 04:57:52.855624 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 16 04:57:52.855777 kernel: pci 0000:00:04.0: BAR 0 [io  0x60e0-0x60ff]
Sep 16 04:57:52.855917 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 16 04:57:52.856095 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 16 04:57:52.856280 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 16 04:57:52.856452 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 16 04:57:52.856614 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 16 04:57:52.856778 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 16 04:57:52.856930 kernel: pci 0000:00:1f.2: BAR 4 [io  0x60c0-0x60df]
Sep 16 04:57:52.857120 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 16 04:57:52.857291 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 16 04:57:52.857435 kernel: pci 0000:00:1f.3: BAR 4 [io  0x6080-0x60bf]
Sep 16 04:57:52.857448 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 16 04:57:52.857458 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 16 04:57:52.857469 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 16 04:57:52.857480 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 16 04:57:52.857490 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 16 04:57:52.857501 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 16 04:57:52.857516 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 16 04:57:52.857524 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 16 04:57:52.857532 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 16 04:57:52.857540 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 16 04:57:52.857548 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 16 04:57:52.857555 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 16 04:57:52.857573 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 16 04:57:52.857581 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 16 04:57:52.857589 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 16 04:57:52.857599 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 16 04:57:52.857607 kernel: iommu: Default domain type: Translated
Sep 16 04:57:52.857617 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 16 04:57:52.857628 kernel: efivars: Registered efivars operations
Sep 16 04:57:52.857638 kernel: PCI: Using ACPI for IRQ routing
Sep 16 04:57:52.857648 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 16 04:57:52.857657 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 16 04:57:52.857667 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 16 04:57:52.857678 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 16 04:57:52.857690 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 16 04:57:52.857698 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 16 04:57:52.857705 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 16 04:57:52.857714 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 16 04:57:52.857721 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 16 04:57:52.857868 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 16 04:57:52.858012 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 16 04:57:52.858177 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 16 04:57:52.858194 kernel: vgaarb: loaded
Sep 16 04:57:52.858202 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 16 04:57:52.858210 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 16 04:57:52.858220 kernel: clocksource: Switched to clocksource kvm-clock
Sep 16 04:57:52.858231 kernel: VFS: Disk quotas dquot_6.6.0
Sep 16 04:57:52.858242 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 16 04:57:52.858252 kernel: pnp: PnP ACPI init
Sep 16 04:57:52.858460 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 16 04:57:52.858487 kernel: pnp: PnP ACPI: found 6 devices
Sep 16 04:57:52.858498 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 16 04:57:52.858509 kernel: NET: Registered PF_INET protocol family
Sep 16 04:57:52.858519 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 16 04:57:52.858527 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 16 04:57:52.858536 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 16 04:57:52.858544 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 16 04:57:52.858552 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 16 04:57:52.858571 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 16 04:57:52.858582 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 16 04:57:52.858590 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 16 04:57:52.858598 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 16 04:57:52.858606 kernel: NET: Registered PF_XDP protocol family
Sep 16 04:57:52.858744 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 16 04:57:52.858898 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 16 04:57:52.859070 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Sep 16 04:57:52.859190 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Sep 16 04:57:52.859327 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 16 04:57:52.859482 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 16 04:57:52.859664 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 16 04:57:52.859802 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 16 04:57:52.859814 kernel: PCI: CLS 0 bytes, default 64
Sep 16 04:57:52.859823 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 16 04:57:52.859831 kernel: Initialise system trusted keyrings
Sep 16 04:57:52.859845 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 16 04:57:52.859853 kernel: Key type asymmetric registered
Sep 16 04:57:52.859861 kernel: Asymmetric key parser 'x509' registered
Sep 16 04:57:52.859870 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 16 04:57:52.859878 kernel: io scheduler mq-deadline registered
Sep 16 04:57:52.859887 kernel: io scheduler kyber registered
Sep 16 04:57:52.859896 kernel: io scheduler bfq registered
Sep 16 04:57:52.859910 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 16 04:57:52.859922 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 16 04:57:52.859934 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 16 04:57:52.859945 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 16 04:57:52.859956 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 16 04:57:52.859968 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 16 04:57:52.859980 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 16 04:57:52.859991 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 16 04:57:52.860003 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 16 04:57:52.860208 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 16 04:57:52.860223 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 16 04:57:52.860342 kernel: rtc_cmos 00:04: registered as rtc0
Sep 16 04:57:52.860459 kernel: rtc_cmos 00:04: setting system clock to 2025-09-16T04:57:52 UTC (1757998672)
Sep 16 04:57:52.860586 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 16 04:57:52.860597 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 16 04:57:52.860606 kernel: efifb: probing for efifb
Sep 16 04:57:52.860614 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 16 04:57:52.860628 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 16 04:57:52.860636 kernel: efifb: scrolling: redraw
Sep 16 04:57:52.860644 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 16 04:57:52.860653 kernel: Console: switching to colour frame buffer device 160x50
Sep 16 04:57:52.860661 kernel: fb0: EFI VGA frame buffer device
Sep 16 04:57:52.860669 kernel: pstore: Using crash dump compression: deflate
Sep 16 04:57:52.860678 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 16 04:57:52.860686 kernel: NET: Registered PF_INET6 protocol family
Sep 16 04:57:52.860694 kernel: Segment Routing with IPv6
Sep 16 04:57:52.860704 kernel: In-situ OAM (IOAM) with IPv6
Sep 16 04:57:52.860713 kernel: NET: Registered PF_PACKET protocol family
Sep 16 04:57:52.860722 kernel: Key type dns_resolver registered
Sep 16 04:57:52.860731 kernel: IPI shorthand broadcast: enabled
Sep 16 04:57:52.860740 kernel: sched_clock: Marking stable (3897002835, 161352767)->(4163182677, -104827075)
Sep 16 04:57:52.860750 kernel: registered taskstats version 1
Sep 16 04:57:52.860758 kernel: Loading compiled-in X.509 certificates
Sep 16 04:57:52.860766 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: d1d5b0d56b9b23dabf19e645632ff93bf659b3bf'
Sep 16 04:57:52.860774 kernel: Demotion targets for Node 0: null
Sep 16 04:57:52.860785 kernel: Key type .fscrypt registered
Sep 16
04:57:52.860792 kernel: Key type fscrypt-provisioning registered Sep 16 04:57:52.860801 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 16 04:57:52.860809 kernel: ima: Allocated hash algorithm: sha1 Sep 16 04:57:52.860817 kernel: ima: No architecture policies found Sep 16 04:57:52.860825 kernel: clk: Disabling unused clocks Sep 16 04:57:52.860833 kernel: Warning: unable to open an initial console. Sep 16 04:57:52.860841 kernel: Freeing unused kernel image (initmem) memory: 54096K Sep 16 04:57:52.860850 kernel: Write protecting the kernel read-only data: 24576k Sep 16 04:57:52.860860 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K Sep 16 04:57:52.860868 kernel: Run /init as init process Sep 16 04:57:52.860876 kernel: with arguments: Sep 16 04:57:52.860884 kernel: /init Sep 16 04:57:52.860892 kernel: with environment: Sep 16 04:57:52.860900 kernel: HOME=/ Sep 16 04:57:52.860908 kernel: TERM=linux Sep 16 04:57:52.860916 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 16 04:57:52.860925 systemd[1]: Successfully made /usr/ read-only. Sep 16 04:57:52.860939 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:57:52.860949 systemd[1]: Detected virtualization kvm. Sep 16 04:57:52.860957 systemd[1]: Detected architecture x86-64. Sep 16 04:57:52.860966 systemd[1]: Running in initrd. Sep 16 04:57:52.860974 systemd[1]: No hostname configured, using default hostname. Sep 16 04:57:52.860983 systemd[1]: Hostname set to . Sep 16 04:57:52.860992 systemd[1]: Initializing machine ID from VM UUID. Sep 16 04:57:52.861003 systemd[1]: Queued start job for default target initrd.target. 
Sep 16 04:57:52.861012 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:57:52.861043 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:57:52.861053 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 16 04:57:52.861062 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:57:52.861071 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 16 04:57:52.861081 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 16 04:57:52.861093 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 16 04:57:52.861102 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 16 04:57:52.861111 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:57:52.861122 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:57:52.861134 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:57:52.861146 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:57:52.861158 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:57:52.861167 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:57:52.861180 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:57:52.861192 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:57:52.861205 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 16 04:57:52.861217 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Sep 16 04:57:52.861228 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:57:52.861240 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 04:57:52.861252 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:57:52.861264 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:57:52.861273 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 16 04:57:52.861285 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:57:52.861294 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 16 04:57:52.861305 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 16 04:57:52.861314 systemd[1]: Starting systemd-fsck-usr.service... Sep 16 04:57:52.861323 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:57:52.861332 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:57:52.861340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:57:52.861349 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 16 04:57:52.861360 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:57:52.861371 systemd[1]: Finished systemd-fsck-usr.service. Sep 16 04:57:52.861380 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 04:57:52.861419 systemd-journald[218]: Collecting audit messages is disabled. Sep 16 04:57:52.861450 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:57:52.861465 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 16 04:57:52.861478 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:57:52.861490 systemd-journald[218]: Journal started Sep 16 04:57:52.861524 systemd-journald[218]: Runtime Journal (/run/log/journal/7355e27c3e404e61b7046621c0e30ab5) is 6M, max 48.4M, 42.4M free. Sep 16 04:57:52.854043 systemd-modules-load[221]: Inserted module 'overlay' Sep 16 04:57:52.863887 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:57:52.869299 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 16 04:57:52.872461 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:57:52.875211 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:57:52.883060 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 16 04:57:52.885312 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 16 04:57:52.886060 kernel: Bridge firewalling registered Sep 16 04:57:52.886430 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:57:52.888981 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:57:52.889464 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 16 04:57:52.895513 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:57:52.903647 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:57:52.905366 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 16 04:57:52.906747 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:57:52.926005 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 16 04:57:52.940807 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 04:57:52.981506 systemd-resolved[262]: Positive Trust Anchors: Sep 16 04:57:52.981532 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:57:52.981573 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:57:52.984388 systemd-resolved[262]: Defaulting to hostname 'linux'. Sep 16 04:57:52.985676 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:57:52.990523 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:57:53.060068 kernel: SCSI subsystem initialized Sep 16 04:57:53.069055 kernel: Loading iSCSI transport class v2.0-870. Sep 16 04:57:53.079053 kernel: iscsi: registered transport (tcp) Sep 16 04:57:53.101063 kernel: iscsi: registered transport (qla4xxx) Sep 16 04:57:53.101107 kernel: QLogic iSCSI HBA Driver Sep 16 04:57:53.123587 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 16 04:57:53.150503 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:57:53.155277 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:57:53.208415 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 16 04:57:53.212179 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 16 04:57:53.280085 kernel: raid6: avx2x4 gen() 22097 MB/s Sep 16 04:57:53.297064 kernel: raid6: avx2x2 gen() 29066 MB/s Sep 16 04:57:53.314200 kernel: raid6: avx2x1 gen() 24697 MB/s Sep 16 04:57:53.314301 kernel: raid6: using algorithm avx2x2 gen() 29066 MB/s Sep 16 04:57:53.332235 kernel: raid6: .... xor() 18664 MB/s, rmw enabled Sep 16 04:57:53.332354 kernel: raid6: using avx2x2 recovery algorithm Sep 16 04:57:53.355094 kernel: xor: automatically using best checksumming function avx Sep 16 04:57:53.582123 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 16 04:57:53.594076 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:57:53.599241 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:57:53.647828 systemd-udevd[471]: Using default interface naming scheme 'v255'. Sep 16 04:57:53.656665 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:57:53.662360 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 16 04:57:53.695826 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Sep 16 04:57:53.733631 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 04:57:53.739523 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:57:53.843624 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:57:53.848785 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 16 04:57:53.907052 kernel: cryptd: max_cpu_qlen set to 1000 Sep 16 04:57:53.913224 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 16 04:57:53.918063 kernel: libata version 3.00 loaded. Sep 16 04:57:53.922251 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 16 04:57:53.934381 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 16 04:57:53.934466 kernel: GPT:9289727 != 19775487 Sep 16 04:57:53.934482 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 16 04:57:53.934496 kernel: GPT:9289727 != 19775487 Sep 16 04:57:53.934509 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 16 04:57:53.934545 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:57:53.936484 kernel: ahci 0000:00:1f.2: version 3.0 Sep 16 04:57:53.936852 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 16 04:57:53.939678 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 16 04:57:53.939867 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 16 04:57:53.940012 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 16 04:57:53.946074 kernel: AES CTR mode by8 optimization enabled Sep 16 04:57:53.946146 kernel: scsi host0: ahci Sep 16 04:57:53.948705 kernel: scsi host1: ahci Sep 16 04:57:53.949231 kernel: scsi host2: ahci Sep 16 04:57:53.956060 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 16 04:57:53.956204 kernel: scsi host3: ahci Sep 16 04:57:53.983348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:57:53.989474 kernel: scsi host4: ahci Sep 16 04:57:53.992594 kernel: scsi host5: ahci Sep 16 04:57:53.985647 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 16 04:57:53.999863 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 16 04:57:53.999885 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 16 04:57:53.999895 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 16 04:57:53.999906 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 16 04:57:53.999916 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 16 04:57:53.999935 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 16 04:57:53.989513 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:57:53.994245 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:57:54.002638 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:57:54.011641 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:57:54.011806 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:57:54.039320 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 16 04:57:54.062415 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 16 04:57:54.070782 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 16 04:57:54.071362 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 16 04:57:54.080272 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 16 04:57:54.083288 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Sep 16 04:57:54.085344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:57:54.102147 disk-uuid[631]: Primary Header is updated. Sep 16 04:57:54.102147 disk-uuid[631]: Secondary Entries is updated. Sep 16 04:57:54.102147 disk-uuid[631]: Secondary Header is updated. Sep 16 04:57:54.108061 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:57:54.110718 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:57:54.114523 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:57:54.312690 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 16 04:57:54.312780 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 16 04:57:54.312793 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 16 04:57:54.312805 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 16 04:57:54.314070 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 16 04:57:54.315082 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 16 04:57:54.316073 kernel: ata3.00: LPM support broken, forcing max_power Sep 16 04:57:54.316100 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 16 04:57:54.316668 kernel: ata3.00: applying bridge limits Sep 16 04:57:54.318367 kernel: ata3.00: LPM support broken, forcing max_power Sep 16 04:57:54.318383 kernel: ata3.00: configured for UDMA/100 Sep 16 04:57:54.319048 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 16 04:57:54.381112 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 16 04:57:54.381584 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 16 04:57:54.407070 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 16 04:57:54.763778 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 16 04:57:54.764790 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 16 04:57:54.766362 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:57:54.766663 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:57:54.768064 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 16 04:57:54.793193 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:57:55.115091 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:57:55.116263 disk-uuid[634]: The operation has completed successfully. Sep 16 04:57:55.153207 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 16 04:57:55.153333 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 16 04:57:55.186112 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 16 04:57:55.214092 sh[665]: Success Sep 16 04:57:55.234991 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 16 04:57:55.235085 kernel: device-mapper: uevent: version 1.0.3 Sep 16 04:57:55.235105 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 16 04:57:55.246061 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 16 04:57:55.280654 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 16 04:57:55.284710 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 16 04:57:55.299271 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 16 04:57:55.303203 kernel: BTRFS: device fsid f1b91845-3914-4d21-a370-6d760ee45b2e devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (677) Sep 16 04:57:55.305216 kernel: BTRFS info (device dm-0): first mount of filesystem f1b91845-3914-4d21-a370-6d760ee45b2e Sep 16 04:57:55.305248 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:57:55.310096 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 16 04:57:55.310125 kernel: BTRFS info (device dm-0): enabling free space tree Sep 16 04:57:55.311691 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 16 04:57:55.312689 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:57:55.314143 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 16 04:57:55.315245 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 16 04:57:55.317393 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 16 04:57:55.363186 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711) Sep 16 04:57:55.363236 kernel: BTRFS info (device vda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:57:55.363247 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:57:55.368341 kernel: BTRFS info (device vda6): turning on async discard Sep 16 04:57:55.368376 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 04:57:55.375059 kernel: BTRFS info (device vda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:57:55.375510 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 16 04:57:55.378339 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 16 04:57:55.643164 ignition[757]: Ignition 2.22.0 Sep 16 04:57:55.643187 ignition[757]: Stage: fetch-offline Sep 16 04:57:55.643248 ignition[757]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:57:55.643263 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:57:55.643402 ignition[757]: parsed url from cmdline: "" Sep 16 04:57:55.643407 ignition[757]: no config URL provided Sep 16 04:57:55.643415 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:57:55.643426 ignition[757]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:57:55.643455 ignition[757]: op(1): [started] loading QEMU firmware config module Sep 16 04:57:55.643461 ignition[757]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 16 04:57:55.655502 ignition[757]: op(1): [finished] loading QEMU firmware config module Sep 16 04:57:55.656287 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:57:55.660573 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:57:55.704784 ignition[757]: parsing config with SHA512: d8532d8fa87d756281502130f58c269ca58143d47dd6434627e3e9d370ad5bbd66e7e78c52a598260ad1cd4bcee2cf61a7574e1212eeed602bbe803522372c75 Sep 16 04:57:55.712630 unknown[757]: fetched base config from "system" Sep 16 04:57:55.712644 unknown[757]: fetched user config from "qemu" Sep 16 04:57:55.713100 ignition[757]: fetch-offline: fetch-offline passed Sep 16 04:57:55.713156 ignition[757]: Ignition finished successfully Sep 16 04:57:55.718015 systemd-networkd[855]: lo: Link UP Sep 16 04:57:55.718039 systemd-networkd[855]: lo: Gained carrier Sep 16 04:57:55.719925 systemd-networkd[855]: Enumeration completed Sep 16 04:57:55.720412 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 16 04:57:55.720416 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:57:55.720425 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:57:55.722096 systemd[1]: Reached target network.target - Network. Sep 16 04:57:55.722752 systemd-networkd[855]: eth0: Link UP Sep 16 04:57:55.723138 systemd-networkd[855]: eth0: Gained carrier Sep 16 04:57:55.723154 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:57:55.730199 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:57:55.730948 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 16 04:57:55.733575 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 16 04:57:55.753126 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 16 04:57:55.782501 ignition[859]: Ignition 2.22.0 Sep 16 04:57:55.782514 ignition[859]: Stage: kargs Sep 16 04:57:55.782670 ignition[859]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:57:55.782682 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:57:55.786843 ignition[859]: kargs: kargs passed Sep 16 04:57:55.786890 ignition[859]: Ignition finished successfully Sep 16 04:57:55.791297 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 16 04:57:55.793966 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 16 04:57:55.917152 ignition[867]: Ignition 2.22.0 Sep 16 04:57:55.917166 ignition[867]: Stage: disks Sep 16 04:57:55.917337 ignition[867]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:57:55.917350 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:57:55.921324 ignition[867]: disks: disks passed Sep 16 04:57:55.921396 ignition[867]: Ignition finished successfully Sep 16 04:57:55.927442 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 16 04:57:55.929006 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 16 04:57:55.931256 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 16 04:57:55.932566 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:57:55.933656 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:57:55.936014 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:57:55.938327 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 16 04:57:55.966043 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 16 04:57:55.974702 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 16 04:57:55.976690 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 16 04:57:56.140073 kernel: EXT4-fs (vda9): mounted filesystem fb1cb44f-955b-4cd0-8849-33ce3640d547 r/w with ordered data mode. Quota mode: none. Sep 16 04:57:56.140964 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 16 04:57:56.141887 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 16 04:57:56.144191 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:57:56.148104 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 16 04:57:56.148681 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 16 04:57:56.148731 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 16 04:57:56.148757 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:57:56.177926 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 16 04:57:56.181412 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 16 04:57:56.187434 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Sep 16 04:57:56.187470 kernel: BTRFS info (device vda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:57:56.187497 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:57:56.191572 kernel: BTRFS info (device vda6): turning on async discard Sep 16 04:57:56.191602 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 04:57:56.194269 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 04:57:56.226400 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Sep 16 04:57:56.232612 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Sep 16 04:57:56.238704 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Sep 16 04:57:56.243075 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Sep 16 04:57:56.352874 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 16 04:57:56.357269 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 16 04:57:56.360538 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 16 04:57:56.383353 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 16 04:57:56.384739 kernel: BTRFS info (device vda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:57:56.401689 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 16 04:57:56.573818 ignition[1000]: INFO : Ignition 2.22.0
Sep 16 04:57:56.573818 ignition[1000]: INFO : Stage: mount
Sep 16 04:57:56.575720 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:57:56.575720 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 16 04:57:56.575720 ignition[1000]: INFO : mount: mount passed
Sep 16 04:57:56.575720 ignition[1000]: INFO : Ignition finished successfully
Sep 16 04:57:56.578253 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 16 04:57:56.580692 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 16 04:57:56.611609 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:57:56.643928 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1013)
Sep 16 04:57:56.643995 kernel: BTRFS info (device vda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:57:56.644012 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:57:56.648177 kernel: BTRFS info (device vda6): turning on async discard
Sep 16 04:57:56.648229 kernel: BTRFS info (device vda6): enabling free space tree
Sep 16 04:57:56.650251 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:57:56.693084 ignition[1030]: INFO : Ignition 2.22.0
Sep 16 04:57:56.693084 ignition[1030]: INFO : Stage: files
Sep 16 04:57:56.695150 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:57:56.695150 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 16 04:57:56.695150 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping
Sep 16 04:57:56.695150 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 16 04:57:56.695150 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 16 04:57:56.702433 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 16 04:57:56.702433 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 16 04:57:56.702433 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 16 04:57:56.702433 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 16 04:57:56.702433 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 16 04:57:56.698000 unknown[1030]: wrote ssh authorized keys file for user: core
Sep 16 04:57:56.763947 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 16 04:57:56.863324 systemd-networkd[855]: eth0: Gained IPv6LL
Sep 16 04:57:57.100064 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 16 04:57:57.102182 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 16 04:57:57.102182 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 16 04:57:57.201413 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 16 04:57:57.325942 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 16 04:57:57.325942 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 16 04:57:57.329727 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 16 04:57:57.329727 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:57:57.329727 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:57:57.329727 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:57:57.329727 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:57:57.329727 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:57:57.329727 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:57:57.342131 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:57:57.342131 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:57:57.342131 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 04:57:57.342131 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 04:57:57.342131 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 04:57:57.342131 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 16 04:57:57.674262 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 16 04:57:58.610956 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 04:57:58.610956 ignition[1030]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 16 04:57:58.615325 ignition[1030]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:57:58.622211 ignition[1030]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:57:58.622211 ignition[1030]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 16 04:57:58.622211 ignition[1030]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 16 04:57:58.627205 ignition[1030]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 16 04:57:58.627205 ignition[1030]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 16 04:57:58.627205 ignition[1030]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 16 04:57:58.627205 ignition[1030]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 16 04:57:58.650658 ignition[1030]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 16 04:57:58.656133 ignition[1030]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 16 04:57:58.657927 ignition[1030]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 16 04:57:58.657927 ignition[1030]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 16 04:57:58.657927 ignition[1030]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 16 04:57:58.657927 ignition[1030]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:57:58.657927 ignition[1030]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:57:58.657927 ignition[1030]: INFO : files: files passed
Sep 16 04:57:58.657927 ignition[1030]: INFO : Ignition finished successfully
Sep 16 04:57:58.660802 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 16 04:57:58.663436 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 16 04:57:58.669099 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 16 04:57:58.687578 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 16 04:57:58.687774 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 16 04:57:58.692117 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 16 04:57:58.696296 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:57:58.696296 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:57:58.700043 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:57:58.703282 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:57:58.703830 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 16 04:57:58.708017 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 16 04:57:58.789350 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 16 04:57:58.790625 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 16 04:57:58.793960 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 16 04:57:58.796003 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 16 04:57:58.798100 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 16 04:57:58.800537 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 16 04:57:58.845426 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:57:58.849274 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 16 04:57:58.873545 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:57:58.875928 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:57:58.878376 systemd[1]: Stopped target timers.target - Timer Units.
Sep 16 04:57:58.880300 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 16 04:57:58.881399 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:57:58.884175 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 16 04:57:58.886281 systemd[1]: Stopped target basic.target - Basic System.
Sep 16 04:57:58.888195 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 16 04:57:58.890483 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:57:58.892838 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 16 04:57:58.895128 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 04:57:58.897449 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 16 04:57:58.899566 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 04:57:58.902139 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 16 04:57:58.904328 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 16 04:57:58.906471 systemd[1]: Stopped target swap.target - Swaps.
Sep 16 04:57:58.908175 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 16 04:57:58.909283 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 04:57:58.911689 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:57:58.913962 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:57:58.916434 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 16 04:57:58.917474 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:57:58.920154 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 16 04:57:58.921219 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:57:58.923546 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 16 04:57:58.924675 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 04:57:58.927150 systemd[1]: Stopped target paths.target - Path Units.
Sep 16 04:57:58.928962 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 16 04:57:58.930168 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:57:58.933007 systemd[1]: Stopped target slices.target - Slice Units.
Sep 16 04:57:58.935273 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 16 04:57:58.935873 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 16 04:57:58.935997 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 04:57:58.938020 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 16 04:57:58.938163 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:57:58.941769 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 16 04:57:58.941924 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:57:58.942569 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 16 04:57:58.942704 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 16 04:57:58.950967 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 16 04:57:58.951550 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 16 04:57:58.951714 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:57:58.954729 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 16 04:57:58.956555 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 16 04:57:58.956715 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:57:58.959612 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 16 04:57:58.959718 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:57:58.969098 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 16 04:57:58.975261 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 16 04:57:58.998980 ignition[1086]: INFO : Ignition 2.22.0
Sep 16 04:57:58.998980 ignition[1086]: INFO : Stage: umount
Sep 16 04:57:58.998980 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:57:58.998980 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 16 04:57:58.998562 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 16 04:57:59.006249 ignition[1086]: INFO : umount: umount passed
Sep 16 04:57:59.006249 ignition[1086]: INFO : Ignition finished successfully
Sep 16 04:57:59.006255 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 16 04:57:59.006442 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 16 04:57:59.007894 systemd[1]: Stopped target network.target - Network.
Sep 16 04:57:59.008512 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 16 04:57:59.008587 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 16 04:57:59.008874 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 16 04:57:59.008941 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 16 04:57:59.009391 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 16 04:57:59.009476 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 16 04:57:59.009890 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 16 04:57:59.009954 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 16 04:57:59.010510 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 16 04:57:59.010992 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 16 04:57:59.022563 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 16 04:57:59.024390 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 16 04:57:59.034630 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 16 04:57:59.034925 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 16 04:57:59.035102 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 16 04:57:59.039199 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 16 04:57:59.040462 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 16 04:57:59.041163 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 16 04:57:59.041236 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:57:59.045142 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 16 04:57:59.045558 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 16 04:57:59.045627 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 04:57:59.045982 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 16 04:57:59.046068 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:57:59.052391 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 16 04:57:59.052460 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:57:59.052973 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 16 04:57:59.053054 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:57:59.057230 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:57:59.059064 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 16 04:57:59.059153 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:57:59.089926 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 16 04:57:59.090198 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:57:59.091110 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 16 04:57:59.091163 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:57:59.094121 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 16 04:57:59.094166 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:57:59.094476 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 16 04:57:59.094533 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 04:57:59.100681 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 16 04:57:59.100758 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 16 04:57:59.104989 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 16 04:57:59.105066 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 04:57:59.109961 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 16 04:57:59.110489 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 16 04:57:59.110554 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:57:59.116208 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 16 04:57:59.116285 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:57:59.120228 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:57:59.120306 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:57:59.125469 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 16 04:57:59.125552 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 16 04:57:59.125622 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:57:59.126004 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 16 04:57:59.138195 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 16 04:57:59.148013 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 16 04:57:59.148186 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 16 04:57:59.196193 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 16 04:57:59.196377 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 16 04:57:59.197514 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 16 04:57:59.199639 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 16 04:57:59.199728 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 16 04:57:59.202858 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 16 04:57:59.223589 systemd[1]: Switching root.
Sep 16 04:57:59.265261 systemd-journald[218]: Journal stopped
Sep 16 04:58:00.528397 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Sep 16 04:58:00.528479 kernel: SELinux: policy capability network_peer_controls=1
Sep 16 04:58:00.528497 kernel: SELinux: policy capability open_perms=1
Sep 16 04:58:00.528511 kernel: SELinux: policy capability extended_socket_class=1
Sep 16 04:58:00.528530 kernel: SELinux: policy capability always_check_network=0
Sep 16 04:58:00.528543 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 16 04:58:00.528557 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 16 04:58:00.528587 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 16 04:58:00.528601 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 16 04:58:00.528615 kernel: SELinux: policy capability userspace_initial_context=0
Sep 16 04:58:00.528629 kernel: audit: type=1403 audit(1757998679.692:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 16 04:58:00.528644 systemd[1]: Successfully loaded SELinux policy in 68.699ms.
Sep 16 04:58:00.528667 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.992ms.
Sep 16 04:58:00.528688 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 04:58:00.528703 systemd[1]: Detected virtualization kvm.
Sep 16 04:58:00.528722 systemd[1]: Detected architecture x86-64.
Sep 16 04:58:00.528737 systemd[1]: Detected first boot.
Sep 16 04:58:00.528751 systemd[1]: Initializing machine ID from VM UUID.
Sep 16 04:58:00.528766 zram_generator::config[1132]: No configuration found.
Sep 16 04:58:00.528781 kernel: Guest personality initialized and is inactive
Sep 16 04:58:00.528795 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 16 04:58:00.528809 kernel: Initialized host personality
Sep 16 04:58:00.528822 kernel: NET: Registered PF_VSOCK protocol family
Sep 16 04:58:00.528837 systemd[1]: Populated /etc with preset unit settings.
Sep 16 04:58:00.528858 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 16 04:58:00.528879 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 16 04:58:00.528893 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 16 04:58:00.528908 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 16 04:58:00.528923 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 16 04:58:00.528937 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 16 04:58:00.528952 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 16 04:58:00.528966 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 16 04:58:00.528981 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 16 04:58:00.529003 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 16 04:58:00.529018 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 16 04:58:00.529049 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 16 04:58:00.529064 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:58:00.529079 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:58:00.529093 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 16 04:58:00.529107 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 16 04:58:00.529129 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 16 04:58:00.529155 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 04:58:00.529173 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 16 04:58:00.529192 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:58:00.529215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:58:00.529233 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 16 04:58:00.529251 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 16 04:58:00.529270 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 16 04:58:00.529288 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 16 04:58:00.529312 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:58:00.529331 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 04:58:00.529349 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 04:58:00.529364 systemd[1]: Reached target swap.target - Swaps.
Sep 16 04:58:00.529390 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 16 04:58:00.529408 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 16 04:58:00.529422 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 16 04:58:00.529437 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:58:00.529454 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:58:00.529475 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:58:00.529490 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 16 04:58:00.529504 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 16 04:58:00.529519 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 16 04:58:00.529533 systemd[1]: Mounting media.mount - External Media Directory...
Sep 16 04:58:00.529549 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:58:00.529564 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 16 04:58:00.529579 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 16 04:58:00.529593 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 16 04:58:00.529614 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 16 04:58:00.529629 systemd[1]: Reached target machines.target - Containers.
Sep 16 04:58:00.529643 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 16 04:58:00.529658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:58:00.529672 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 04:58:00.529687 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 16 04:58:00.529702 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:58:00.529716 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 04:58:00.529738 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:58:00.529753 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 16 04:58:00.529768 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:58:00.529782 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 16 04:58:00.529803 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 16 04:58:00.529818 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 16 04:58:00.529832 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 16 04:58:00.529847 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 16 04:58:00.529863 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:58:00.529883 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 04:58:00.529898 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 04:58:00.529912 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 04:58:00.529926 kernel: loop: module loaded
Sep 16 04:58:00.529940 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 16 04:58:00.529960 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 16 04:58:00.529974 kernel: fuse: init (API version 7.41)
Sep 16 04:58:00.529988 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 04:58:00.530003 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 16 04:58:00.530018 kernel: ACPI: bus type drm_connector registered
Sep 16 04:58:00.530048 systemd[1]: Stopped verity-setup.service.
Sep 16 04:58:00.530064 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:58:00.530088 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 16 04:58:00.530103 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 16 04:58:00.530119 systemd[1]: Mounted media.mount - External Media Directory.
Sep 16 04:58:00.530159 systemd-journald[1210]: Collecting audit messages is disabled.
Sep 16 04:58:00.530185 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 16 04:58:00.530200 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 16 04:58:00.530221 systemd-journald[1210]: Journal started
Sep 16 04:58:00.530248 systemd-journald[1210]: Runtime Journal (/run/log/journal/7355e27c3e404e61b7046621c0e30ab5) is 6M, max 48.4M, 42.4M free.
Sep 16 04:58:00.259444 systemd[1]: Queued start job for default target multi-user.target.
Sep 16 04:58:00.277400 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 16 04:58:00.277972 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 16 04:58:00.533061 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 04:58:00.534836 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 16 04:58:00.536358 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 16 04:58:00.538073 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:58:00.540011 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 16 04:58:00.540334 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 16 04:58:00.542203 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:58:00.542488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:58:00.544228 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 04:58:00.544503 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 04:58:00.546109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:58:00.546357 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:58:00.548092 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 16 04:58:00.548438 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 16 04:58:00.549910 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:58:00.550159 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:58:00.551628 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:58:00.553295 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:58:00.554989 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 16 04:58:00.556663 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 16 04:58:00.572652 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 04:58:00.575394 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 16 04:58:00.577594 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 16 04:58:00.578753 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 16 04:58:00.578780 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 04:58:00.580847 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 16 04:58:00.583314 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 16 04:58:00.584519 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:58:00.597669 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 16 04:58:00.601417 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 16 04:58:00.602740 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 04:58:00.604961 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 16 04:58:00.606232 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 04:58:00.610512 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:58:00.615471 systemd-journald[1210]: Time spent on flushing to /var/log/journal/7355e27c3e404e61b7046621c0e30ab5 is 15.516ms for 1073 entries.
Sep 16 04:58:00.615471 systemd-journald[1210]: System Journal (/var/log/journal/7355e27c3e404e61b7046621c0e30ab5) is 8M, max 195.6M, 187.6M free.
Sep 16 04:58:00.650058 systemd-journald[1210]: Received client request to flush runtime journal.
Sep 16 04:58:00.650120 kernel: loop0: detected capacity change from 0 to 221472
Sep 16 04:58:00.613205 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 16 04:58:00.616860 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 16 04:58:00.619805 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 16 04:58:00.621481 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:58:00.622916 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 16 04:58:00.628495 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 16 04:58:00.632684 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 16 04:58:00.637782 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 16 04:58:00.654220 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 16 04:58:00.657098 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:58:00.667057 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 16 04:58:00.680166 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 16 04:58:00.683312 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 16 04:58:00.686730 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 04:58:00.695075 kernel: loop1: detected capacity change from 0 to 128016
Sep 16 04:58:00.722173 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Sep 16 04:58:00.722193 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Sep 16 04:58:00.723068 kernel: loop2: detected capacity change from 0 to 110984
Sep 16 04:58:00.727849 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:58:00.764062 kernel: loop3: detected capacity change from 0 to 221472
Sep 16 04:58:00.777075 kernel: loop4: detected capacity change from 0 to 128016
Sep 16 04:58:00.790066 kernel: loop5: detected capacity change from 0 to 110984
Sep 16 04:58:00.800877 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 16 04:58:00.801536 (sd-merge)[1273]: Merged extensions into '/usr'.
Sep 16 04:58:00.808048 systemd[1]: Reload requested from client PID 1251 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 16 04:58:00.808070 systemd[1]: Reloading...
Sep 16 04:58:00.869091 zram_generator::config[1299]: No configuration found.
Sep 16 04:58:00.974114 ldconfig[1246]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 16 04:58:01.136809 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 16 04:58:01.137439 systemd[1]: Reloading finished in 328 ms.
Sep 16 04:58:01.177325 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 16 04:58:01.178909 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 16 04:58:01.204657 systemd[1]: Starting ensure-sysext.service...
Sep 16 04:58:01.206877 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 04:58:01.223045 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 16 04:58:01.229152 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:58:01.230102 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 16 04:58:01.230140 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 16 04:58:01.230474 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 16 04:58:01.230743 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 16 04:58:01.231305 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)...
Sep 16 04:58:01.231323 systemd[1]: Reloading...
Sep 16 04:58:01.231867 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 16 04:58:01.232241 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Sep 16 04:58:01.232317 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Sep 16 04:58:01.237095 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:58:01.237107 systemd-tmpfiles[1337]: Skipping /boot
Sep 16 04:58:01.248013 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:58:01.248039 systemd-tmpfiles[1337]: Skipping /boot
Sep 16 04:58:01.271384 systemd-udevd[1340]: Using default interface naming scheme 'v255'.
Sep 16 04:58:01.300121 zram_generator::config[1368]: No configuration found.
Sep 16 04:58:01.460058 kernel: mousedev: PS/2 mouse device common for all mice
Sep 16 04:58:01.463099 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 16 04:58:01.472056 kernel: ACPI: button: Power Button [PWRF]
Sep 16 04:58:01.497062 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 16 04:58:01.499990 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 16 04:58:01.500260 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 16 04:58:01.558414 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 16 04:58:01.558652 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 16 04:58:01.560546 systemd[1]: Reloading finished in 328 ms.
Sep 16 04:58:01.575426 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:58:01.578256 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:58:01.638538 systemd[1]: Finished ensure-sysext.service.
Sep 16 04:58:01.668303 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:58:01.669123 kernel: kvm_amd: TSC scaling supported
Sep 16 04:58:01.669213 kernel: kvm_amd: Nested Virtualization enabled
Sep 16 04:58:01.669231 kernel: kvm_amd: Nested Paging enabled
Sep 16 04:58:01.669248 kernel: kvm_amd: LBR virtualization supported
Sep 16 04:58:01.669933 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 16 04:58:01.670766 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 16 04:58:01.670798 kernel: kvm_amd: Virtual GIF supported
Sep 16 04:58:01.674794 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 16 04:58:01.676512 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:58:01.688154 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:58:01.691109 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 04:58:01.695247 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:58:01.698018 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:58:01.699499 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:58:01.704070 kernel: EDAC MC: Ver: 3.0.0
Sep 16 04:58:01.704398 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 16 04:58:01.706139 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:58:01.707624 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 16 04:58:01.715191 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 04:58:01.719090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 04:58:01.723534 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 16 04:58:01.726300 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 16 04:58:01.730853 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:58:01.732172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:58:01.733300 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:58:01.735333 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:58:01.737057 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 04:58:01.740465 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 04:58:01.741457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:58:01.741722 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:58:01.742683 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:58:01.743466 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:58:01.745472 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 16 04:58:01.755606 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 16 04:58:01.762087 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 16 04:58:01.763137 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 04:58:01.763228 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 04:58:01.764747 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 16 04:58:01.766734 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 16 04:58:01.821795 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 16 04:58:01.836523 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 16 04:58:01.837851 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 16 04:58:01.856680 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 16 04:58:01.862762 augenrules[1513]: No rules
Sep 16 04:58:01.864905 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 16 04:58:01.865320 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 16 04:58:02.116411 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:58:02.162746 systemd-networkd[1473]: lo: Link UP
Sep 16 04:58:02.162755 systemd-networkd[1473]: lo: Gained carrier
Sep 16 04:58:02.164518 systemd-networkd[1473]: Enumeration completed
Sep 16 04:58:02.164652 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 04:58:02.164906 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:58:02.164911 systemd-networkd[1473]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 04:58:02.165629 systemd-networkd[1473]: eth0: Link UP
Sep 16 04:58:02.165784 systemd-networkd[1473]: eth0: Gained carrier
Sep 16 04:58:02.165803 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:58:02.167555 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 16 04:58:02.172167 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 16 04:58:02.183171 systemd-networkd[1473]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 16 04:58:02.183173 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 16 04:58:02.183658 systemd-resolved[1475]: Positive Trust Anchors:
Sep 16 04:58:02.183668 systemd-resolved[1475]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 04:58:02.183712 systemd-resolved[1475]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 04:58:02.185160 systemd[1]: Reached target time-set.target - System Time Set.
Sep 16 04:58:02.186518 systemd-timesyncd[1476]: Network configuration changed, trying to establish connection.
Sep 16 04:58:03.539040 systemd-timesyncd[1476]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 16 04:58:03.539157 systemd-resolved[1475]: Defaulting to hostname 'linux'.
Sep 16 04:58:03.539172 systemd-timesyncd[1476]: Initial clock synchronization to Tue 2025-09-16 04:58:03.538760 UTC.
Sep 16 04:58:03.541138 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 04:58:03.542520 systemd[1]: Reached target network.target - Network.
Sep 16 04:58:03.543516 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:58:03.544822 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 04:58:03.546081 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 16 04:58:03.547484 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 16 04:58:03.548867 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 16 04:58:03.550349 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 16 04:58:03.551698 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 16 04:58:03.553081 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 16 04:58:03.554451 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 16 04:58:03.554487 systemd[1]: Reached target paths.target - Path Units.
Sep 16 04:58:03.555501 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 04:58:03.557536 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 16 04:58:03.560610 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 16 04:58:03.564028 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 16 04:58:03.565657 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 16 04:58:03.567054 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 16 04:58:03.573381 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 16 04:58:03.574764 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 16 04:58:03.577211 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 16 04:58:03.578714 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 16 04:58:03.581412 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 04:58:03.582463 systemd[1]: Reached target basic.target - Basic System.
Sep 16 04:58:03.583553 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 16 04:58:03.583594 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 16 04:58:03.585038 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 16 04:58:03.587612 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 16 04:58:03.590000 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 16 04:58:03.592604 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 16 04:58:03.595107 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 16 04:58:03.596214 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 16 04:58:03.606856 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 16 04:58:03.610357 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 16 04:58:03.611248 jq[1532]: false
Sep 16 04:58:03.613274 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 16 04:58:03.615596 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 16 04:58:03.620457 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 16 04:58:03.624884 extend-filesystems[1533]: Found /dev/vda6
Sep 16 04:58:03.626323 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 16 04:58:03.627643 oslogin_cache_refresh[1534]: Refreshing passwd entry cache
Sep 16 04:58:03.628690 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Refreshing passwd entry cache
Sep 16 04:58:03.632388 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 16 04:58:03.632961 extend-filesystems[1533]: Found /dev/vda9
Sep 16 04:58:03.632994 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 16 04:58:03.634504 systemd[1]: Starting update-engine.service - Update Engine...
Sep 16 04:58:03.635714 extend-filesystems[1533]: Checking size of /dev/vda9
Sep 16 04:58:03.639339 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 16 04:58:03.640938 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Failure getting users, quitting
Sep 16 04:58:03.640938 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 16 04:58:03.640938 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Refreshing group entry cache
Sep 16 04:58:03.640335 oslogin_cache_refresh[1534]: Failure getting users, quitting
Sep 16 04:58:03.640360 oslogin_cache_refresh[1534]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 16 04:58:03.640425 oslogin_cache_refresh[1534]: Refreshing group entry cache
Sep 16 04:58:03.648394 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 16 04:58:03.650269 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Failure getting groups, quitting
Sep 16 04:58:03.650264 oslogin_cache_refresh[1534]: Failure getting groups, quitting
Sep 16 04:58:03.650387 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 16 04:58:03.650282 oslogin_cache_refresh[1534]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 16 04:58:03.651696 extend-filesystems[1533]: Resized partition /dev/vda9
Sep 16 04:58:03.742402 update_engine[1548]: I20250916 04:58:03.741389 1548 main.cc:92] Flatcar Update Engine starting
Sep 16 04:58:03.739875 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 16 04:58:03.744345 extend-filesystems[1560]: resize2fs 1.47.3 (8-Jul-2025)
Sep 16 04:58:03.750300 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 16 04:58:03.750334 jq[1553]: true
Sep 16 04:58:03.740155 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 16 04:58:03.740511 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 16 04:58:03.740749 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 16 04:58:03.744096 systemd[1]: motdgen.service: Deactivated successfully.
Sep 16 04:58:03.744389 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 16 04:58:03.749867 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 16 04:58:03.751106 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 16 04:58:03.799217 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 16 04:58:03.804612 tar[1562]: linux-amd64/helm
Sep 16 04:58:03.806742 (ntainerd)[1573]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 16 04:58:03.928546 extend-filesystems[1560]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 16 04:58:03.928546 extend-filesystems[1560]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 16 04:58:03.928546 extend-filesystems[1560]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 16 04:58:04.004170 extend-filesystems[1533]: Resized filesystem in /dev/vda9
Sep 16 04:58:04.014220 jq[1567]: true
Sep 16 04:58:04.108806 sshd_keygen[1557]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 16 04:58:04.110343 systemd-logind[1543]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 16 04:58:04.110661 systemd-logind[1543]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 16 04:58:04.111228 systemd-logind[1543]: New seat seat0.
Sep 16 04:58:04.118339 dbus-daemon[1530]: [system] SELinux support is enabled
Sep 16 04:58:04.121911 update_engine[1548]: I20250916 04:58:04.121767 1548 update_check_scheduler.cc:74] Next update check in 3m37s
Sep 16 04:58:04.127231 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 16 04:58:04.128962 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 16 04:58:04.133334 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 16 04:58:04.133799 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 16 04:58:04.140893 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 16 04:58:04.140936 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 16 04:58:04.142280 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 16 04:58:04.142304 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 16 04:58:04.146323 systemd[1]: Started update-engine.service - Update Engine.
Sep 16 04:58:04.147455 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 16 04:58:04.158168 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 16 04:58:04.159993 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 16 04:58:04.165519 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 16 04:58:04.186140 systemd[1]: issuegen.service: Deactivated successfully.
Sep 16 04:58:04.186984 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 16 04:58:04.191582 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 16 04:58:04.948731 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1252835588 wd_nsec: 1252835526
Sep 16 04:58:04.946027 systemd-networkd[1473]: eth0: Gained IPv6LL
Sep 16 04:58:04.962685 locksmithd[1588]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 16 04:58:04.964370 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 16 04:58:04.968497 systemd[1]: Reached target network-online.target - Network is Online.
Sep 16 04:58:04.972440 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 16 04:58:04.973879 bash[1603]: Updated "/home/core/.ssh/authorized_keys"
Sep 16 04:58:05.132582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:58:05.140302 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 16 04:58:05.142969 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 16 04:58:05.145424 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 16 04:58:05.156235 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 16 04:58:05.161740 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 16 04:58:05.164311 systemd[1]: Reached target getty.target - Login Prompts.
Sep 16 04:58:05.165388 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 16 04:58:05.218532 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 16 04:58:05.221289 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 16 04:58:05.221650 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 16 04:58:05.227179 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 16 04:58:05.490989 tar[1562]: linux-amd64/LICENSE
Sep 16 04:58:05.491668 tar[1562]: linux-amd64/README.md
Sep 16 04:58:05.510078 containerd[1573]: time="2025-09-16T04:58:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 16 04:58:05.511761 containerd[1573]: time="2025-09-16T04:58:05.511667222Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 16 04:58:05.535014 containerd[1573]: time="2025-09-16T04:58:05.534735505Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t=1.59982ms
Sep 16 04:58:05.535014 containerd[1573]: time="2025-09-16T04:58:05.534992477Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 16 04:58:05.535259 containerd[1573]: time="2025-09-16T04:58:05.535054173Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 16 04:58:05.535458 containerd[1573]: time="2025-09-16T04:58:05.535410210Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 16 04:58:05.535492 containerd[1573]: time="2025-09-16T04:58:05.535454894Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 16 04:58:05.535538 containerd[1573]: time="2025-09-16T04:58:05.535515538Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 16 04:58:05.535851 containerd[1573]: time="2025-09-16T04:58:05.535808307Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 16 04:58:05.535851 containerd[1573]: time="2025-09-16T04:58:05.535848001Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 16 04:58:05.536563 containerd[1573]: time="2025-09-16T04:58:05.536501226Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 16 04:58:05.536563 containerd[1573]: time="2025-09-16T04:58:05.536545619Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 16 04:58:05.536650 containerd[1573]: time="2025-09-16T04:58:05.536568041Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 16 04:58:05.536650 containerd[1573]: time="2025-09-16T04:58:05.536587147Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 16 04:58:05.536824 containerd[1573]: time="2025-09-16T04:58:05.536765011Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 16 04:58:05.537916 containerd[1573]: time="2025-09-16T04:58:05.537180289Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 16 04:58:05.537916 containerd[1573]: time="2025-09-16T04:58:05.537514145Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 16 04:58:05.537916 containerd[1573]: time="2025-09-16T04:58:05.537533692Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 16 04:58:05.537916 containerd[1573]: time="2025-09-16T04:58:05.537594506Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 16 04:58:05.538049 containerd[1573]: time="2025-09-16T04:58:05.538009163Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 16 04:58:05.538502 containerd[1573]: time="2025-09-16T04:58:05.538459187Z" level=info msg="metadata content store policy set" policy=shared
Sep 16 04:58:05.549147 containerd[1573]: time="2025-09-16T04:58:05.549040707Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 16 04:58:05.549656 containerd[1573]: time="2025-09-16T04:58:05.549270328Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 16 04:58:05.549656 containerd[1573]: time="2025-09-16T04:58:05.549457519Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 16 04:58:05.549656 containerd[1573]: time="2025-09-16T04:58:05.549507332Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 16 04:58:05.549656 containerd[1573]: time="2025-09-16T04:58:05.549554110Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 16 04:58:05.549656 containerd[1573]: time="2025-09-16T04:58:05.549591570Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 16 04:58:05.549656 containerd[1573]: time="2025-09-16T04:58:05.549635302Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 16 04:58:05.549905 containerd[1573]: time="2025-09-16T04:58:05.549670679Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 16 04:58:05.549905 containerd[1573]: time="2025-09-16T04:58:05.549689474Z" level=info msg="loading plugin"
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 04:58:05.549905 containerd[1573]: time="2025-09-16T04:58:05.549750909Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 04:58:05.549905 containerd[1573]: time="2025-09-16T04:58:05.549772419Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 04:58:05.549905 containerd[1573]: time="2025-09-16T04:58:05.549817925Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 04:58:05.550391 containerd[1573]: time="2025-09-16T04:58:05.550097799Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 04:58:05.550391 containerd[1573]: time="2025-09-16T04:58:05.550361885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 04:58:05.550528 containerd[1573]: time="2025-09-16T04:58:05.550394666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 04:58:05.550528 containerd[1573]: time="2025-09-16T04:58:05.550413552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 04:58:05.550528 containerd[1573]: time="2025-09-16T04:58:05.550463044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 04:58:05.550528 containerd[1573]: time="2025-09-16T04:58:05.550511716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 04:58:05.550528 containerd[1573]: time="2025-09-16T04:58:05.550529138Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 04:58:05.550834 containerd[1573]: time="2025-09-16T04:58:05.550545248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 
04:58:05.550834 containerd[1573]: time="2025-09-16T04:58:05.550578381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 04:58:05.550834 containerd[1573]: time="2025-09-16T04:58:05.550669211Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 04:58:05.550834 containerd[1573]: time="2025-09-16T04:58:05.550756194Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 04:58:05.551323 containerd[1573]: time="2025-09-16T04:58:05.551262994Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 04:58:05.551323 containerd[1573]: time="2025-09-16T04:58:05.551303861Z" level=info msg="Start snapshots syncer" Sep 16 04:58:05.551399 containerd[1573]: time="2025-09-16T04:58:05.551344768Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 04:58:05.552334 containerd[1573]: time="2025-09-16T04:58:05.552035102Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 04:58:05.553491 containerd[1573]: time="2025-09-16T04:58:05.552595553Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 04:58:05.553491 containerd[1573]: time="2025-09-16T04:58:05.553060104Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 04:58:05.553699 containerd[1573]: time="2025-09-16T04:58:05.553656993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 04:58:05.553805 containerd[1573]: time="2025-09-16T04:58:05.553750739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 04:58:05.553864 containerd[1573]: time="2025-09-16T04:58:05.553808207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 04:58:05.553864 containerd[1573]: time="2025-09-16T04:58:05.553850796Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 04:58:05.554011 containerd[1573]: time="2025-09-16T04:58:05.553927140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 04:58:05.554011 containerd[1573]: time="2025-09-16T04:58:05.553979317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 04:58:05.554220 containerd[1573]: time="2025-09-16T04:58:05.554032617Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 04:58:05.554422 containerd[1573]: time="2025-09-16T04:58:05.554349131Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 04:58:05.554520 containerd[1573]: time="2025-09-16T04:58:05.554420865Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 04:58:05.554520 containerd[1573]: time="2025-09-16T04:58:05.554489714Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 04:58:05.554846 containerd[1573]: time="2025-09-16T04:58:05.554753098Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:58:05.555363 containerd[1573]: time="2025-09-16T04:58:05.555297549Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:58:05.555363 containerd[1573]: time="2025-09-16T04:58:05.555348374Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:58:05.555552 containerd[1573]: time="2025-09-16T04:58:05.555391726Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:58:05.555552 containerd[1573]: time="2025-09-16T04:58:05.555419287Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 04:58:05.555552 containerd[1573]: time="2025-09-16T04:58:05.555461446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 04:58:05.555552 containerd[1573]: time="2025-09-16T04:58:05.555501932Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 04:58:05.555780 containerd[1573]: time="2025-09-16T04:58:05.555631225Z" level=info msg="runtime interface created" Sep 16 04:58:05.555780 containerd[1573]: time="2025-09-16T04:58:05.555660910Z" level=info msg="created NRI interface" Sep 16 04:58:05.555780 containerd[1573]: time="2025-09-16T04:58:05.555694123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 04:58:05.555780 containerd[1573]: time="2025-09-16T04:58:05.555737384Z" level=info msg="Connect containerd service" Sep 16 04:58:05.556010 containerd[1573]: time="2025-09-16T04:58:05.555805421Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 04:58:05.560496 
containerd[1573]: time="2025-09-16T04:58:05.560429341Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:58:05.674744 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 04:58:05.819099 containerd[1573]: time="2025-09-16T04:58:05.818910593Z" level=info msg="Start subscribing containerd event" Sep 16 04:58:05.819099 containerd[1573]: time="2025-09-16T04:58:05.819003598Z" level=info msg="Start recovering state" Sep 16 04:58:05.819413 containerd[1573]: time="2025-09-16T04:58:05.819238889Z" level=info msg="Start event monitor" Sep 16 04:58:05.819413 containerd[1573]: time="2025-09-16T04:58:05.819249008Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 16 04:58:05.819413 containerd[1573]: time="2025-09-16T04:58:05.819282360Z" level=info msg="Start cni network conf syncer for default" Sep 16 04:58:05.819413 containerd[1573]: time="2025-09-16T04:58:05.819304061Z" level=info msg="Start streaming server" Sep 16 04:58:05.819413 containerd[1573]: time="2025-09-16T04:58:05.819325992Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 04:58:05.819708 containerd[1573]: time="2025-09-16T04:58:05.819592532Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 04:58:05.820029 containerd[1573]: time="2025-09-16T04:58:05.820004264Z" level=info msg="runtime interface starting up..." Sep 16 04:58:05.820029 containerd[1573]: time="2025-09-16T04:58:05.820025764Z" level=info msg="starting plugins..." Sep 16 04:58:05.820112 containerd[1573]: time="2025-09-16T04:58:05.820058606Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 04:58:05.821240 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 16 04:58:05.823052 containerd[1573]: time="2025-09-16T04:58:05.822998007Z" level=info msg="containerd successfully booted in 0.319348s" Sep 16 04:58:07.389777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:07.392474 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 16 04:58:07.401770 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:58:07.401870 systemd[1]: Startup finished in 3.966s (kernel) + 7.033s (initrd) + 6.427s (userspace) = 17.426s. Sep 16 04:58:07.817272 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 16 04:58:07.818641 systemd[1]: Started sshd@0-10.0.0.114:22-10.0.0.1:58710.service - OpenSSH per-connection server daemon (10.0.0.1:58710). Sep 16 04:58:07.956817 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 58710 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:58:07.959079 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:58:07.974290 systemd-logind[1543]: New session 1 of user core. Sep 16 04:58:07.976009 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 04:58:07.977586 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 16 04:58:08.004736 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 04:58:08.007775 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 16 04:58:08.027157 (systemd)[1680]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 04:58:08.031830 systemd-logind[1543]: New session c1 of user core. Sep 16 04:58:08.212810 systemd[1680]: Queued start job for default target default.target. Sep 16 04:58:08.260703 systemd[1680]: Created slice app.slice - User Application Slice. 
Sep 16 04:58:08.260737 systemd[1680]: Reached target paths.target - Paths. Sep 16 04:58:08.260786 systemd[1680]: Reached target timers.target - Timers. Sep 16 04:58:08.262533 systemd[1680]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 04:58:08.277597 systemd[1680]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 04:58:08.277723 systemd[1680]: Reached target sockets.target - Sockets. Sep 16 04:58:08.277762 systemd[1680]: Reached target basic.target - Basic System. Sep 16 04:58:08.277803 systemd[1680]: Reached target default.target - Main User Target. Sep 16 04:58:08.277840 systemd[1680]: Startup finished in 235ms. Sep 16 04:58:08.278280 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 04:58:08.283347 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 16 04:58:08.292908 kubelet[1664]: E0916 04:58:08.292836 1664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:58:08.297379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:58:08.297592 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:58:08.298040 systemd[1]: kubelet.service: Consumed 2.836s CPU time, 266.6M memory peak. Sep 16 04:58:08.345434 systemd[1]: Started sshd@1-10.0.0.114:22-10.0.0.1:58718.service - OpenSSH per-connection server daemon (10.0.0.1:58718). Sep 16 04:58:08.420748 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 58718 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:58:08.422827 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:58:08.428302 systemd-logind[1543]: New session 2 of user core. 
Sep 16 04:58:08.438412 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 16 04:58:08.497428 sshd[1696]: Connection closed by 10.0.0.1 port 58718 Sep 16 04:58:08.497919 sshd-session[1693]: pam_unix(sshd:session): session closed for user core Sep 16 04:58:08.512689 systemd[1]: sshd@1-10.0.0.114:22-10.0.0.1:58718.service: Deactivated successfully. Sep 16 04:58:08.514972 systemd[1]: session-2.scope: Deactivated successfully. Sep 16 04:58:08.515882 systemd-logind[1543]: Session 2 logged out. Waiting for processes to exit. Sep 16 04:58:08.520553 systemd[1]: Started sshd@2-10.0.0.114:22-10.0.0.1:58728.service - OpenSSH per-connection server daemon (10.0.0.1:58728). Sep 16 04:58:08.521540 systemd-logind[1543]: Removed session 2. Sep 16 04:58:08.587393 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 58728 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:58:08.589797 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:58:08.597204 systemd-logind[1543]: New session 3 of user core. Sep 16 04:58:08.605694 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 16 04:58:08.658308 sshd[1705]: Connection closed by 10.0.0.1 port 58728 Sep 16 04:58:08.658901 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Sep 16 04:58:08.677568 systemd[1]: sshd@2-10.0.0.114:22-10.0.0.1:58728.service: Deactivated successfully. Sep 16 04:58:08.680032 systemd[1]: session-3.scope: Deactivated successfully. Sep 16 04:58:08.681155 systemd-logind[1543]: Session 3 logged out. Waiting for processes to exit. Sep 16 04:58:08.687112 systemd[1]: Started sshd@3-10.0.0.114:22-10.0.0.1:58744.service - OpenSSH per-connection server daemon (10.0.0.1:58744). Sep 16 04:58:08.688808 systemd-logind[1543]: Removed session 3. 
Sep 16 04:58:08.754503 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 58744 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:58:08.756569 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:58:08.762792 systemd-logind[1543]: New session 4 of user core. Sep 16 04:58:08.777606 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 16 04:58:08.836132 sshd[1714]: Connection closed by 10.0.0.1 port 58744 Sep 16 04:58:08.836684 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Sep 16 04:58:08.847044 systemd[1]: sshd@3-10.0.0.114:22-10.0.0.1:58744.service: Deactivated successfully. Sep 16 04:58:08.849625 systemd[1]: session-4.scope: Deactivated successfully. Sep 16 04:58:08.850629 systemd-logind[1543]: Session 4 logged out. Waiting for processes to exit. Sep 16 04:58:08.854246 systemd[1]: Started sshd@4-10.0.0.114:22-10.0.0.1:58748.service - OpenSSH per-connection server daemon (10.0.0.1:58748). Sep 16 04:58:08.855125 systemd-logind[1543]: Removed session 4. Sep 16 04:58:08.928215 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 58748 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:58:08.930414 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:58:08.936043 systemd-logind[1543]: New session 5 of user core. Sep 16 04:58:08.945339 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 16 04:58:09.009044 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 04:58:09.009524 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:58:09.025506 sudo[1724]: pam_unix(sudo:session): session closed for user root Sep 16 04:58:09.027855 sshd[1723]: Connection closed by 10.0.0.1 port 58748 Sep 16 04:58:09.028439 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Sep 16 04:58:09.038836 systemd[1]: sshd@4-10.0.0.114:22-10.0.0.1:58748.service: Deactivated successfully. Sep 16 04:58:09.040941 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 04:58:09.042221 systemd-logind[1543]: Session 5 logged out. Waiting for processes to exit. Sep 16 04:58:09.045558 systemd[1]: Started sshd@5-10.0.0.114:22-10.0.0.1:58764.service - OpenSSH per-connection server daemon (10.0.0.1:58764). Sep 16 04:58:09.046420 systemd-logind[1543]: Removed session 5. Sep 16 04:58:09.111583 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 58764 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:58:09.113545 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:58:09.118787 systemd-logind[1543]: New session 6 of user core. Sep 16 04:58:09.128559 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 16 04:58:09.186040 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 04:58:09.186500 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:58:09.671060 sudo[1735]: pam_unix(sudo:session): session closed for user root Sep 16 04:58:09.678214 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 04:58:09.678551 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:58:09.691236 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:58:09.751863 augenrules[1757]: No rules Sep 16 04:58:09.753688 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:58:09.754000 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:58:09.755388 sudo[1734]: pam_unix(sudo:session): session closed for user root Sep 16 04:58:09.757071 sshd[1733]: Connection closed by 10.0.0.1 port 58764 Sep 16 04:58:09.757485 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Sep 16 04:58:09.770039 systemd[1]: sshd@5-10.0.0.114:22-10.0.0.1:58764.service: Deactivated successfully. Sep 16 04:58:09.771922 systemd[1]: session-6.scope: Deactivated successfully. Sep 16 04:58:09.772809 systemd-logind[1543]: Session 6 logged out. Waiting for processes to exit. Sep 16 04:58:09.775604 systemd[1]: Started sshd@6-10.0.0.114:22-10.0.0.1:58780.service - OpenSSH per-connection server daemon (10.0.0.1:58780). Sep 16 04:58:09.776244 systemd-logind[1543]: Removed session 6. Sep 16 04:58:09.831232 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 58780 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:58:09.832907 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:58:09.837817 systemd-logind[1543]: New session 7 of user core. 
Sep 16 04:58:09.847469 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 16 04:58:09.903675 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 04:58:09.904066 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:58:10.746639 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 16 04:58:10.773151 (dockerd)[1792]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 04:58:11.453020 dockerd[1792]: time="2025-09-16T04:58:11.452932323Z" level=info msg="Starting up" Sep 16 04:58:11.453864 dockerd[1792]: time="2025-09-16T04:58:11.453841878Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 04:58:11.576981 dockerd[1792]: time="2025-09-16T04:58:11.576911484Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 04:58:11.890675 dockerd[1792]: time="2025-09-16T04:58:11.890577036Z" level=info msg="Loading containers: start." Sep 16 04:58:11.904310 kernel: Initializing XFRM netlink socket Sep 16 04:58:12.232865 systemd-networkd[1473]: docker0: Link UP Sep 16 04:58:12.239099 dockerd[1792]: time="2025-09-16T04:58:12.239033763Z" level=info msg="Loading containers: done." Sep 16 04:58:12.281679 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck489846401-merged.mount: Deactivated successfully. 
Sep 16 04:58:12.282280 dockerd[1792]: time="2025-09-16T04:58:12.282065988Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 04:58:12.282280 dockerd[1792]: time="2025-09-16T04:58:12.282215058Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 04:58:12.282391 dockerd[1792]: time="2025-09-16T04:58:12.282372363Z" level=info msg="Initializing buildkit" Sep 16 04:58:12.322159 dockerd[1792]: time="2025-09-16T04:58:12.322064525Z" level=info msg="Completed buildkit initialization" Sep 16 04:58:12.328960 dockerd[1792]: time="2025-09-16T04:58:12.328902336Z" level=info msg="Daemon has completed initialization" Sep 16 04:58:12.329132 dockerd[1792]: time="2025-09-16T04:58:12.329029034Z" level=info msg="API listen on /run/docker.sock" Sep 16 04:58:12.329395 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 04:58:13.457431 containerd[1573]: time="2025-09-16T04:58:13.457374090Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 16 04:58:17.120719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4074322679.mount: Deactivated successfully. 
Sep 16 04:58:18.133539 containerd[1573]: time="2025-09-16T04:58:18.133468877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:18.134212 containerd[1573]: time="2025-09-16T04:58:18.134168348Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 16 04:58:18.135460 containerd[1573]: time="2025-09-16T04:58:18.135429162Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:18.137879 containerd[1573]: time="2025-09-16T04:58:18.137842307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:18.138748 containerd[1573]: time="2025-09-16T04:58:18.138721476Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 4.681303583s" Sep 16 04:58:18.138789 containerd[1573]: time="2025-09-16T04:58:18.138752484Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 16 04:58:18.139687 containerd[1573]: time="2025-09-16T04:58:18.139666818Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 16 04:58:18.482457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Sep 16 04:58:18.484569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:18.877125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:18.895010 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:58:20.403512 kubelet[2076]: E0916 04:58:20.403431 2076 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:58:20.411346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:58:20.411545 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:58:20.411926 systemd[1]: kubelet.service: Consumed 453ms CPU time, 110.8M memory peak. 
Sep 16 04:58:22.346636 containerd[1573]: time="2025-09-16T04:58:22.346552518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:22.528073 containerd[1573]: time="2025-09-16T04:58:22.527960127Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 16 04:58:22.613741 containerd[1573]: time="2025-09-16T04:58:22.613558133Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:22.730157 containerd[1573]: time="2025-09-16T04:58:22.730086223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:22.731355 containerd[1573]: time="2025-09-16T04:58:22.731320708Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 4.591628001s" Sep 16 04:58:22.731441 containerd[1573]: time="2025-09-16T04:58:22.731357617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 16 04:58:22.732270 containerd[1573]: time="2025-09-16T04:58:22.732218451Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 16 04:58:24.747656 containerd[1573]: time="2025-09-16T04:58:24.747557852Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:24.802455 containerd[1573]: time="2025-09-16T04:58:24.802340219Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 16 04:58:24.840389 containerd[1573]: time="2025-09-16T04:58:24.840293662Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:24.880599 containerd[1573]: time="2025-09-16T04:58:24.880523082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:24.881984 containerd[1573]: time="2025-09-16T04:58:24.881897569Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 2.149645304s" Sep 16 04:58:24.881984 containerd[1573]: time="2025-09-16T04:58:24.881964094Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 16 04:58:24.882795 containerd[1573]: time="2025-09-16T04:58:24.882754696Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 16 04:58:26.309639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3201666631.mount: Deactivated successfully. 
Sep 16 04:58:27.154441 containerd[1573]: time="2025-09-16T04:58:27.154349025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:27.155138 containerd[1573]: time="2025-09-16T04:58:27.155082039Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 16 04:58:27.156566 containerd[1573]: time="2025-09-16T04:58:27.156504627Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:27.159497 containerd[1573]: time="2025-09-16T04:58:27.159448466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:27.160037 containerd[1573]: time="2025-09-16T04:58:27.159987076Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.277184059s" Sep 16 04:58:27.160085 containerd[1573]: time="2025-09-16T04:58:27.160040587Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 16 04:58:27.160657 containerd[1573]: time="2025-09-16T04:58:27.160613781Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 16 04:58:28.825086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2289666431.mount: Deactivated successfully. 
Sep 16 04:58:29.928819 containerd[1573]: time="2025-09-16T04:58:29.928732924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:29.929612 containerd[1573]: time="2025-09-16T04:58:29.929586524Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 16 04:58:29.931208 containerd[1573]: time="2025-09-16T04:58:29.931139196Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:29.934821 containerd[1573]: time="2025-09-16T04:58:29.934780012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:29.935952 containerd[1573]: time="2025-09-16T04:58:29.935887288Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.775234574s" Sep 16 04:58:29.935952 containerd[1573]: time="2025-09-16T04:58:29.935939095Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 16 04:58:29.936538 containerd[1573]: time="2025-09-16T04:58:29.936475942Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 04:58:30.372403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659535378.mount: Deactivated successfully. 
Sep 16 04:58:30.380539 containerd[1573]: time="2025-09-16T04:58:30.380483266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:58:30.381420 containerd[1573]: time="2025-09-16T04:58:30.381376901Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 16 04:58:30.382618 containerd[1573]: time="2025-09-16T04:58:30.382551564Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:58:30.384943 containerd[1573]: time="2025-09-16T04:58:30.384895549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:58:30.385537 containerd[1573]: time="2025-09-16T04:58:30.385491446Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 448.986028ms" Sep 16 04:58:30.385537 containerd[1573]: time="2025-09-16T04:58:30.385524057Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 16 04:58:30.386041 containerd[1573]: time="2025-09-16T04:58:30.385996513Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 16 04:58:30.482545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 16 04:58:30.484796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:30.720224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:30.750920 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:58:30.802599 kubelet[2165]: E0916 04:58:30.802238 2165 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:58:30.808496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:58:30.808700 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:58:30.809370 systemd[1]: kubelet.service: Consumed 247ms CPU time, 110.1M memory peak. Sep 16 04:58:31.414353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1497750387.mount: Deactivated successfully. 
Sep 16 04:58:33.651946 containerd[1573]: time="2025-09-16T04:58:33.651851470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:33.654431 containerd[1573]: time="2025-09-16T04:58:33.654340938Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 16 04:58:33.658497 containerd[1573]: time="2025-09-16T04:58:33.658439863Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:33.662987 containerd[1573]: time="2025-09-16T04:58:33.662726771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:33.664346 containerd[1573]: time="2025-09-16T04:58:33.664269724Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.278238436s" Sep 16 04:58:33.664346 containerd[1573]: time="2025-09-16T04:58:33.664332833Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 16 04:58:37.095894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:37.096080 systemd[1]: kubelet.service: Consumed 247ms CPU time, 110.1M memory peak. Sep 16 04:58:37.098417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:37.127840 systemd[1]: Reload requested from client PID 2258 ('systemctl') (unit session-7.scope)... 
Sep 16 04:58:37.127867 systemd[1]: Reloading... Sep 16 04:58:37.234247 zram_generator::config[2309]: No configuration found. Sep 16 04:58:37.545149 systemd[1]: Reloading finished in 416 ms. Sep 16 04:58:37.610966 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 04:58:37.611093 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 16 04:58:37.611705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:37.611780 systemd[1]: kubelet.service: Consumed 179ms CPU time, 98.2M memory peak. Sep 16 04:58:37.613694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:37.833137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:37.842537 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:58:37.904875 kubelet[2348]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:58:37.904875 kubelet[2348]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 16 04:58:37.904875 kubelet[2348]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 16 04:58:37.905500 kubelet[2348]: I0916 04:58:37.904934 2348 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:58:38.206828 kubelet[2348]: I0916 04:58:38.206657 2348 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 16 04:58:38.206828 kubelet[2348]: I0916 04:58:38.206708 2348 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:58:38.207103 kubelet[2348]: I0916 04:58:38.207072 2348 server.go:934] "Client rotation is on, will bootstrap in background" Sep 16 04:58:38.233136 kubelet[2348]: E0916 04:58:38.233093 2348 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:38.235377 kubelet[2348]: I0916 04:58:38.235353 2348 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:58:38.244652 kubelet[2348]: I0916 04:58:38.244621 2348 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:58:38.252934 kubelet[2348]: I0916 04:58:38.252901 2348 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:58:38.253870 kubelet[2348]: I0916 04:58:38.253827 2348 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 16 04:58:38.254087 kubelet[2348]: I0916 04:58:38.254032 2348 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:58:38.254343 kubelet[2348]: I0916 04:58:38.254074 2348 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 16 04:58:38.254459 kubelet[2348]: I0916 04:58:38.254349 2348 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:58:38.254459 kubelet[2348]: I0916 04:58:38.254361 2348 container_manager_linux.go:300] "Creating device plugin manager" Sep 16 04:58:38.254564 kubelet[2348]: I0916 04:58:38.254543 2348 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:58:38.258418 kubelet[2348]: I0916 04:58:38.258361 2348 kubelet.go:408] "Attempting to sync node with API server" Sep 16 04:58:38.258418 kubelet[2348]: I0916 04:58:38.258397 2348 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:58:38.258622 kubelet[2348]: I0916 04:58:38.258446 2348 kubelet.go:314] "Adding apiserver pod source" Sep 16 04:58:38.258622 kubelet[2348]: I0916 04:58:38.258469 2348 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:58:38.262210 kubelet[2348]: I0916 04:58:38.262169 2348 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:58:38.262894 kubelet[2348]: I0916 04:58:38.262852 2348 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:58:38.262969 kubelet[2348]: W0916 04:58:38.262948 2348 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 16 04:58:38.265455 kubelet[2348]: I0916 04:58:38.265416 2348 server.go:1274] "Started kubelet" Sep 16 04:58:38.266426 kubelet[2348]: W0916 04:58:38.266348 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 16 04:58:38.266491 kubelet[2348]: E0916 04:58:38.266428 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:38.266578 kubelet[2348]: I0916 04:58:38.266513 2348 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:58:38.267052 kubelet[2348]: I0916 04:58:38.267019 2348 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:58:38.267113 kubelet[2348]: I0916 04:58:38.267087 2348 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:58:38.267268 kubelet[2348]: I0916 04:58:38.267249 2348 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:58:38.267516 kubelet[2348]: W0916 04:58:38.267073 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 16 04:58:38.267516 kubelet[2348]: E0916 04:58:38.267517 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:38.268826 kubelet[2348]: I0916 04:58:38.268785 2348 server.go:449] "Adding debug handlers to kubelet server" Sep 16 04:58:38.269198 kubelet[2348]: I0916 04:58:38.269156 2348 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:58:38.275592 kubelet[2348]: E0916 04:58:38.275530 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:58:38.275592 kubelet[2348]: I0916 04:58:38.275555 2348 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 16 04:58:38.275679 kubelet[2348]: I0916 04:58:38.275654 2348 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 16 04:58:38.275797 kubelet[2348]: I0916 04:58:38.275769 2348 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:58:38.276327 kubelet[2348]: W0916 04:58:38.276293 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 16 04:58:38.276617 kubelet[2348]: E0916 04:58:38.276583 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:38.277551 kubelet[2348]: I0916 04:58:38.277497 2348 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: 
connect: no such file or directory Sep 16 04:58:38.279339 kubelet[2348]: E0916 04:58:38.276026 2348 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.114:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.114:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1865aa851dcd022f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-16 04:58:38.265393711 +0000 UTC m=+0.417949555,LastTimestamp:2025-09-16 04:58:38.265393711 +0000 UTC m=+0.417949555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 16 04:58:38.280027 kubelet[2348]: E0916 04:58:38.279759 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="200ms" Sep 16 04:58:38.280732 kubelet[2348]: I0916 04:58:38.280702 2348 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:58:38.280783 kubelet[2348]: I0916 04:58:38.280750 2348 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:58:38.285233 kubelet[2348]: E0916 04:58:38.284946 2348 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:58:38.301297 kubelet[2348]: I0916 04:58:38.301269 2348 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 16 04:58:38.301297 kubelet[2348]: I0916 04:58:38.301289 2348 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 16 04:58:38.301431 kubelet[2348]: I0916 04:58:38.301310 2348 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:58:38.307377 kubelet[2348]: I0916 04:58:38.307317 2348 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:58:38.308818 kubelet[2348]: I0916 04:58:38.308752 2348 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 04:58:38.308818 kubelet[2348]: I0916 04:58:38.308785 2348 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 16 04:58:38.308818 kubelet[2348]: I0916 04:58:38.308812 2348 kubelet.go:2321] "Starting kubelet main sync loop" Sep 16 04:58:38.308908 kubelet[2348]: E0916 04:58:38.308855 2348 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:58:38.309667 kubelet[2348]: W0916 04:58:38.309600 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 16 04:58:38.309667 kubelet[2348]: E0916 04:58:38.309643 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:38.375841 kubelet[2348]: E0916 04:58:38.375770 2348 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:58:38.409322 kubelet[2348]: E0916 04:58:38.409254 2348 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 16 04:58:38.476609 kubelet[2348]: E0916 04:58:38.476485 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:58:38.481174 kubelet[2348]: E0916 04:58:38.481127 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="400ms" Sep 16 04:58:38.577738 kubelet[2348]: E0916 04:58:38.577658 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:58:38.609927 kubelet[2348]: E0916 04:58:38.609867 2348 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 16 04:58:38.678390 kubelet[2348]: E0916 04:58:38.678330 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:58:38.778470 kubelet[2348]: E0916 04:58:38.778414 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:58:38.879003 kubelet[2348]: E0916 04:58:38.878942 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:58:38.882493 kubelet[2348]: E0916 04:58:38.882440 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="800ms" Sep 16 04:58:38.953515 
kubelet[2348]: I0916 04:58:38.953444 2348 policy_none.go:49] "None policy: Start" Sep 16 04:58:38.954549 kubelet[2348]: I0916 04:58:38.954523 2348 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 16 04:58:38.954636 kubelet[2348]: I0916 04:58:38.954556 2348 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:58:38.963924 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 04:58:38.977383 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 04:58:38.979471 kubelet[2348]: E0916 04:58:38.979422 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:58:38.982169 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 04:58:38.992331 kubelet[2348]: I0916 04:58:38.992219 2348 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:58:38.992519 kubelet[2348]: I0916 04:58:38.992502 2348 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:58:38.992590 kubelet[2348]: I0916 04:58:38.992523 2348 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:58:38.992893 kubelet[2348]: I0916 04:58:38.992864 2348 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:58:38.994344 kubelet[2348]: E0916 04:58:38.994273 2348 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 16 04:58:39.020535 systemd[1]: Created slice kubepods-burstable-pod424aaebaf19afe11fac880b43002720b.slice - libcontainer container kubepods-burstable-pod424aaebaf19afe11fac880b43002720b.slice. 
Sep 16 04:58:39.049900 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 16 04:58:39.067248 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 16 04:58:39.078465 kubelet[2348]: I0916 04:58:39.078434 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/424aaebaf19afe11fac880b43002720b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"424aaebaf19afe11fac880b43002720b\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:58:39.078531 kubelet[2348]: I0916 04:58:39.078470 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/424aaebaf19afe11fac880b43002720b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"424aaebaf19afe11fac880b43002720b\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:58:39.078531 kubelet[2348]: I0916 04:58:39.078489 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:58:39.078531 kubelet[2348]: I0916 04:58:39.078511 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/424aaebaf19afe11fac880b43002720b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"424aaebaf19afe11fac880b43002720b\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:58:39.078531 
kubelet[2348]: I0916 04:58:39.078527 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:58:39.078631 kubelet[2348]: I0916 04:58:39.078548 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:58:39.078631 kubelet[2348]: I0916 04:58:39.078568 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:58:39.078758 kubelet[2348]: I0916 04:58:39.078711 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:58:39.078796 kubelet[2348]: I0916 04:58:39.078777 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" 
Sep 16 04:58:39.094206 kubelet[2348]: I0916 04:58:39.094148 2348 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:58:39.094721 kubelet[2348]: E0916 04:58:39.094680 2348 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Sep 16 04:58:39.121521 kubelet[2348]: W0916 04:58:39.121458 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 16 04:58:39.121692 kubelet[2348]: E0916 04:58:39.121525 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:39.132779 kubelet[2348]: W0916 04:58:39.132638 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 16 04:58:39.132779 kubelet[2348]: E0916 04:58:39.132787 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:39.296841 kubelet[2348]: I0916 04:58:39.296787 2348 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:58:39.297285 kubelet[2348]: 
E0916 04:58:39.297244 2348 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Sep 16 04:58:39.349357 kubelet[2348]: E0916 04:58:39.349181 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:39.350088 containerd[1573]: time="2025-09-16T04:58:39.350045680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:424aaebaf19afe11fac880b43002720b,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:39.352457 kubelet[2348]: W0916 04:58:39.352427 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 16 04:58:39.352542 kubelet[2348]: E0916 04:58:39.352466 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:39.365108 kubelet[2348]: E0916 04:58:39.365063 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:39.365715 containerd[1573]: time="2025-09-16T04:58:39.365662252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:39.371212 kubelet[2348]: E0916 04:58:39.370114 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:39.371316 containerd[1573]: time="2025-09-16T04:58:39.370637095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:39.379565 containerd[1573]: time="2025-09-16T04:58:39.379499929Z" level=info msg="connecting to shim 37450f6d63491acec2291ff672594661b7852b16f324a5bd61f093a25e3ca65e" address="unix:///run/containerd/s/8093dc1652cc45f28d7ac7fae21d2f4cc12e9e36f5737e307034b0589b56701b" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:39.415423 containerd[1573]: time="2025-09-16T04:58:39.415359606Z" level=info msg="connecting to shim 7f33534e861f5461d1d45110a1a76becace96da7db1e775e4fdd1629c63d1351" address="unix:///run/containerd/s/3f054fef67e1a6953fec5eaf57f471d8d950161eb6d34d1ede31963d3a489b2b" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:39.416562 systemd[1]: Started cri-containerd-37450f6d63491acec2291ff672594661b7852b16f324a5bd61f093a25e3ca65e.scope - libcontainer container 37450f6d63491acec2291ff672594661b7852b16f324a5bd61f093a25e3ca65e. Sep 16 04:58:39.417358 containerd[1573]: time="2025-09-16T04:58:39.417325028Z" level=info msg="connecting to shim 1734efa11e8088c069248252a01e244aa0d914e9e230bbf9c7e630b429a6602c" address="unix:///run/containerd/s/a48283da385ac25f8f4fd0964269e0ee6bd2373c40614d40ba37ba85b09f43f4" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:39.516339 systemd[1]: Started cri-containerd-1734efa11e8088c069248252a01e244aa0d914e9e230bbf9c7e630b429a6602c.scope - libcontainer container 1734efa11e8088c069248252a01e244aa0d914e9e230bbf9c7e630b429a6602c. Sep 16 04:58:39.536418 systemd[1]: Started cri-containerd-7f33534e861f5461d1d45110a1a76becace96da7db1e775e4fdd1629c63d1351.scope - libcontainer container 7f33534e861f5461d1d45110a1a76becace96da7db1e775e4fdd1629c63d1351. 
Sep 16 04:58:39.560734 kubelet[2348]: W0916 04:58:39.560575 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 16 04:58:39.560893 kubelet[2348]: E0916 04:58:39.560824 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:39.579174 containerd[1573]: time="2025-09-16T04:58:39.579108310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:424aaebaf19afe11fac880b43002720b,Namespace:kube-system,Attempt:0,} returns sandbox id \"37450f6d63491acec2291ff672594661b7852b16f324a5bd61f093a25e3ca65e\"" Sep 16 04:58:39.580970 kubelet[2348]: E0916 04:58:39.580930 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:39.583453 containerd[1573]: time="2025-09-16T04:58:39.583404513Z" level=info msg="CreateContainer within sandbox \"37450f6d63491acec2291ff672594661b7852b16f324a5bd61f093a25e3ca65e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 04:58:39.597214 containerd[1573]: time="2025-09-16T04:58:39.596365010Z" level=info msg="Container bbdda3014e61a6aca094429b1e9c72899767d0a14ad1dbd6d07b24bc2aea2731: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:39.610953 containerd[1573]: time="2025-09-16T04:58:39.610792476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"7f33534e861f5461d1d45110a1a76becace96da7db1e775e4fdd1629c63d1351\"" Sep 16 04:58:39.612909 kubelet[2348]: E0916 04:58:39.612864 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:39.616244 containerd[1573]: time="2025-09-16T04:58:39.616147916Z" level=info msg="CreateContainer within sandbox \"37450f6d63491acec2291ff672594661b7852b16f324a5bd61f093a25e3ca65e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bbdda3014e61a6aca094429b1e9c72899767d0a14ad1dbd6d07b24bc2aea2731\"" Sep 16 04:58:39.616403 containerd[1573]: time="2025-09-16T04:58:39.616373578Z" level=info msg="CreateContainer within sandbox \"7f33534e861f5461d1d45110a1a76becace96da7db1e775e4fdd1629c63d1351\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 04:58:39.616788 containerd[1573]: time="2025-09-16T04:58:39.616756621Z" level=info msg="StartContainer for \"bbdda3014e61a6aca094429b1e9c72899767d0a14ad1dbd6d07b24bc2aea2731\"" Sep 16 04:58:39.619024 containerd[1573]: time="2025-09-16T04:58:39.618984084Z" level=info msg="connecting to shim bbdda3014e61a6aca094429b1e9c72899767d0a14ad1dbd6d07b24bc2aea2731" address="unix:///run/containerd/s/8093dc1652cc45f28d7ac7fae21d2f4cc12e9e36f5737e307034b0589b56701b" protocol=ttrpc version=3 Sep 16 04:58:39.620840 containerd[1573]: time="2025-09-16T04:58:39.620341412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1734efa11e8088c069248252a01e244aa0d914e9e230bbf9c7e630b429a6602c\"" Sep 16 04:58:39.621814 kubelet[2348]: E0916 04:58:39.621787 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:39.624085 
containerd[1573]: time="2025-09-16T04:58:39.624027779Z" level=info msg="CreateContainer within sandbox \"1734efa11e8088c069248252a01e244aa0d914e9e230bbf9c7e630b429a6602c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 04:58:39.633536 containerd[1573]: time="2025-09-16T04:58:39.633482696Z" level=info msg="Container 24db1c37b691d3c92882e5502497d23b360a139c36a45e99195e1c4474cadb18: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:39.638913 containerd[1573]: time="2025-09-16T04:58:39.638855991Z" level=info msg="Container 20958b1a74200695025f964b997627d9b63d4ccaf6e7996b690cea1af08056c5: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:39.648163 containerd[1573]: time="2025-09-16T04:58:39.648108110Z" level=info msg="CreateContainer within sandbox \"7f33534e861f5461d1d45110a1a76becace96da7db1e775e4fdd1629c63d1351\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"24db1c37b691d3c92882e5502497d23b360a139c36a45e99195e1c4474cadb18\"" Sep 16 04:58:39.648568 containerd[1573]: time="2025-09-16T04:58:39.648534486Z" level=info msg="StartContainer for \"24db1c37b691d3c92882e5502497d23b360a139c36a45e99195e1c4474cadb18\"" Sep 16 04:58:39.649547 containerd[1573]: time="2025-09-16T04:58:39.649509743Z" level=info msg="connecting to shim 24db1c37b691d3c92882e5502497d23b360a139c36a45e99195e1c4474cadb18" address="unix:///run/containerd/s/3f054fef67e1a6953fec5eaf57f471d8d950161eb6d34d1ede31963d3a489b2b" protocol=ttrpc version=3 Sep 16 04:58:39.650526 containerd[1573]: time="2025-09-16T04:58:39.650491832Z" level=info msg="CreateContainer within sandbox \"1734efa11e8088c069248252a01e244aa0d914e9e230bbf9c7e630b429a6602c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"20958b1a74200695025f964b997627d9b63d4ccaf6e7996b690cea1af08056c5\"" Sep 16 04:58:39.651196 containerd[1573]: time="2025-09-16T04:58:39.651119454Z" level=info msg="StartContainer for 
\"20958b1a74200695025f964b997627d9b63d4ccaf6e7996b690cea1af08056c5\"" Sep 16 04:58:39.652497 systemd[1]: Started cri-containerd-bbdda3014e61a6aca094429b1e9c72899767d0a14ad1dbd6d07b24bc2aea2731.scope - libcontainer container bbdda3014e61a6aca094429b1e9c72899767d0a14ad1dbd6d07b24bc2aea2731. Sep 16 04:58:39.652665 containerd[1573]: time="2025-09-16T04:58:39.652560792Z" level=info msg="connecting to shim 20958b1a74200695025f964b997627d9b63d4ccaf6e7996b690cea1af08056c5" address="unix:///run/containerd/s/a48283da385ac25f8f4fd0964269e0ee6bd2373c40614d40ba37ba85b09f43f4" protocol=ttrpc version=3 Sep 16 04:58:39.676456 systemd[1]: Started cri-containerd-24db1c37b691d3c92882e5502497d23b360a139c36a45e99195e1c4474cadb18.scope - libcontainer container 24db1c37b691d3c92882e5502497d23b360a139c36a45e99195e1c4474cadb18. Sep 16 04:58:39.682910 systemd[1]: Started cri-containerd-20958b1a74200695025f964b997627d9b63d4ccaf6e7996b690cea1af08056c5.scope - libcontainer container 20958b1a74200695025f964b997627d9b63d4ccaf6e7996b690cea1af08056c5. 
Sep 16 04:58:39.684329 kubelet[2348]: E0916 04:58:39.684281 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="1.6s" Sep 16 04:58:39.712177 kubelet[2348]: I0916 04:58:39.711988 2348 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:58:39.712721 kubelet[2348]: E0916 04:58:39.712469 2348 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Sep 16 04:58:39.772964 containerd[1573]: time="2025-09-16T04:58:39.772870302Z" level=info msg="StartContainer for \"bbdda3014e61a6aca094429b1e9c72899767d0a14ad1dbd6d07b24bc2aea2731\" returns successfully" Sep 16 04:58:39.774838 containerd[1573]: time="2025-09-16T04:58:39.774659466Z" level=info msg="StartContainer for \"24db1c37b691d3c92882e5502497d23b360a139c36a45e99195e1c4474cadb18\" returns successfully" Sep 16 04:58:39.782180 containerd[1573]: time="2025-09-16T04:58:39.782128833Z" level=info msg="StartContainer for \"20958b1a74200695025f964b997627d9b63d4ccaf6e7996b690cea1af08056c5\" returns successfully" Sep 16 04:58:40.320809 kubelet[2348]: E0916 04:58:40.320738 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:40.323384 kubelet[2348]: E0916 04:58:40.321037 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:40.323384 kubelet[2348]: E0916 04:58:40.322944 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:40.552224 kubelet[2348]: I0916 04:58:40.552169 2348 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:58:41.324951 kubelet[2348]: E0916 04:58:41.324900 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:41.348226 kubelet[2348]: E0916 04:58:41.347842 2348 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1865aa851dcd022f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-16 04:58:38.265393711 +0000 UTC m=+0.417949555,LastTimestamp:2025-09-16 04:58:38.265393711 +0000 UTC m=+0.417949555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 16 04:58:41.350215 kubelet[2348]: I0916 04:58:41.349607 2348 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 16 04:58:41.350215 kubelet[2348]: E0916 04:58:41.349650 2348 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 16 04:58:41.364507 kubelet[2348]: E0916 04:58:41.364451 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:58:41.367725 kubelet[2348]: E0916 04:58:41.367679 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:41.465149 kubelet[2348]: 
E0916 04:58:41.465095 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:58:42.261160 kubelet[2348]: I0916 04:58:42.261075 2348 apiserver.go:52] "Watching apiserver" Sep 16 04:58:42.276265 kubelet[2348]: I0916 04:58:42.276167 2348 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 16 04:58:43.258166 kubelet[2348]: E0916 04:58:43.258092 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:43.327939 kubelet[2348]: E0916 04:58:43.327884 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:44.366721 systemd[1]: Reload requested from client PID 2627 ('systemctl') (unit session-7.scope)... Sep 16 04:58:44.366738 systemd[1]: Reloading... Sep 16 04:58:44.463257 zram_generator::config[2670]: No configuration found. Sep 16 04:58:44.729718 systemd[1]: Reloading finished in 362 ms. Sep 16 04:58:44.761342 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:44.773883 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 04:58:44.774260 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:44.774320 systemd[1]: kubelet.service: Consumed 1.034s CPU time, 130.9M memory peak. Sep 16 04:58:44.776482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:45.054586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 16 04:58:45.060911 (kubelet)[2715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:58:45.138597 kubelet[2715]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:58:45.138597 kubelet[2715]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 16 04:58:45.138597 kubelet[2715]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:58:45.139127 kubelet[2715]: I0916 04:58:45.138649 2715 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:58:45.147525 kubelet[2715]: I0916 04:58:45.147474 2715 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 16 04:58:45.147525 kubelet[2715]: I0916 04:58:45.147500 2715 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:58:45.147754 kubelet[2715]: I0916 04:58:45.147734 2715 server.go:934] "Client rotation is on, will bootstrap in background" Sep 16 04:58:45.149007 kubelet[2715]: I0916 04:58:45.148969 2715 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 16 04:58:45.150829 kubelet[2715]: I0916 04:58:45.150799 2715 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:58:45.158336 kubelet[2715]: I0916 04:58:45.158300 2715 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:58:45.163878 kubelet[2715]: I0916 04:58:45.163474 2715 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 16 04:58:45.163878 kubelet[2715]: I0916 04:58:45.163599 2715 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 16 04:58:45.163878 kubelet[2715]: I0916 04:58:45.163760 2715 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:58:45.164050 kubelet[2715]: I0916 04:58:45.163799 2715 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nod
efs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:58:45.164050 kubelet[2715]: I0916 04:58:45.163992 2715 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:58:45.164050 kubelet[2715]: I0916 04:58:45.164002 2715 container_manager_linux.go:300] "Creating device plugin manager" Sep 16 04:58:45.164050 kubelet[2715]: I0916 04:58:45.164034 2715 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:58:45.164299 kubelet[2715]: I0916 04:58:45.164159 2715 kubelet.go:408] "Attempting to sync node with API server" Sep 16 04:58:45.164299 kubelet[2715]: I0916 04:58:45.164173 2715 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:58:45.164299 kubelet[2715]: I0916 04:58:45.164231 2715 kubelet.go:314] "Adding apiserver pod source" Sep 16 04:58:45.164299 kubelet[2715]: I0916 04:58:45.164242 2715 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:58:45.165512 kubelet[2715]: I0916 04:58:45.165489 2715 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:58:45.166118 kubelet[2715]: I0916 04:58:45.166094 2715 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:58:45.166722 kubelet[2715]: I0916 04:58:45.166685 2715 server.go:1274] "Started kubelet" Sep 16 04:58:45.167378 kubelet[2715]: I0916 04:58:45.167341 2715 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 
04:58:45.169131 kubelet[2715]: I0916 04:58:45.169096 2715 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:58:45.169131 kubelet[2715]: I0916 04:58:45.169120 2715 server.go:449] "Adding debug handlers to kubelet server" Sep 16 04:58:45.171359 kubelet[2715]: I0916 04:58:45.171141 2715 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:58:45.171474 kubelet[2715]: I0916 04:58:45.171451 2715 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:58:45.173310 kubelet[2715]: I0916 04:58:45.173202 2715 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:58:45.174047 kubelet[2715]: E0916 04:58:45.174014 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:58:45.174581 kubelet[2715]: I0916 04:58:45.173765 2715 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 16 04:58:45.174893 kubelet[2715]: I0916 04:58:45.174550 2715 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 16 04:58:45.175214 kubelet[2715]: I0916 04:58:45.175168 2715 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:58:45.178147 kubelet[2715]: I0916 04:58:45.178107 2715 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:58:45.179919 kubelet[2715]: I0916 04:58:45.179816 2715 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:58:45.179919 kubelet[2715]: I0916 04:58:45.179831 2715 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:58:45.183146 kubelet[2715]: E0916 04:58:45.182987 2715 kubelet.go:1478] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:58:45.195341 kubelet[2715]: I0916 04:58:45.195297 2715 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:58:45.196569 kubelet[2715]: I0916 04:58:45.196548 2715 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 04:58:45.196569 kubelet[2715]: I0916 04:58:45.196568 2715 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 16 04:58:45.197273 kubelet[2715]: I0916 04:58:45.196626 2715 kubelet.go:2321] "Starting kubelet main sync loop" Sep 16 04:58:45.197273 kubelet[2715]: E0916 04:58:45.196694 2715 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:58:45.235259 kubelet[2715]: I0916 04:58:45.235220 2715 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 16 04:58:45.235259 kubelet[2715]: I0916 04:58:45.235246 2715 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 16 04:58:45.235259 kubelet[2715]: I0916 04:58:45.235268 2715 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:58:45.235477 kubelet[2715]: I0916 04:58:45.235419 2715 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 04:58:45.235477 kubelet[2715]: I0916 04:58:45.235430 2715 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 04:58:45.235477 kubelet[2715]: I0916 04:58:45.235449 2715 policy_none.go:49] "None policy: Start" Sep 16 04:58:45.236137 kubelet[2715]: I0916 04:58:45.236116 2715 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 16 04:58:45.236208 kubelet[2715]: I0916 04:58:45.236148 2715 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:58:45.236318 kubelet[2715]: I0916 04:58:45.236299 2715 state_mem.go:75] "Updated machine memory state" Sep 16 04:58:45.240958 kubelet[2715]: I0916 
04:58:45.240921 2715 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:58:45.241150 kubelet[2715]: I0916 04:58:45.241127 2715 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:58:45.241216 kubelet[2715]: I0916 04:58:45.241151 2715 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:58:45.241744 kubelet[2715]: I0916 04:58:45.241706 2715 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:58:45.280918 sudo[2749]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 16 04:58:45.281357 sudo[2749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 16 04:58:45.306959 kubelet[2715]: E0916 04:58:45.306659 2715 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 16 04:58:45.346993 kubelet[2715]: I0916 04:58:45.346960 2715 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:58:45.354233 kubelet[2715]: I0916 04:58:45.354154 2715 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 16 04:58:45.354594 kubelet[2715]: I0916 04:58:45.354526 2715 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 16 04:58:45.376982 kubelet[2715]: I0916 04:58:45.376920 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/424aaebaf19afe11fac880b43002720b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"424aaebaf19afe11fac880b43002720b\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:58:45.376982 kubelet[2715]: I0916 04:58:45.376954 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/424aaebaf19afe11fac880b43002720b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"424aaebaf19afe11fac880b43002720b\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:58:45.376982 kubelet[2715]: I0916 04:58:45.376974 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:58:45.376982 kubelet[2715]: I0916 04:58:45.376988 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:58:45.376982 kubelet[2715]: I0916 04:58:45.377004 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:58:45.377343 kubelet[2715]: I0916 04:58:45.377021 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:58:45.377343 kubelet[2715]: I0916 04:58:45.377042 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 16 04:58:45.377343 kubelet[2715]: I0916 04:58:45.377057 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/424aaebaf19afe11fac880b43002720b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"424aaebaf19afe11fac880b43002720b\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:58:45.377343 kubelet[2715]: I0916 04:58:45.377071 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:58:45.607050 kubelet[2715]: E0916 04:58:45.606754 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:45.607050 kubelet[2715]: E0916 04:58:45.607038 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:45.607322 kubelet[2715]: E0916 04:58:45.607102 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:45.669912 sudo[2749]: pam_unix(sudo:session): session closed for user root Sep 16 04:58:46.165528 kubelet[2715]: I0916 04:58:46.165472 2715 apiserver.go:52] "Watching apiserver" Sep 16 04:58:46.175214 kubelet[2715]: I0916 
04:58:46.175126 2715 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 16 04:58:46.215072 kubelet[2715]: E0916 04:58:46.215032 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:46.216125 kubelet[2715]: E0916 04:58:46.216095 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:46.222217 kubelet[2715]: E0916 04:58:46.221869 2715 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 16 04:58:46.222217 kubelet[2715]: E0916 04:58:46.222033 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:46.237544 kubelet[2715]: I0916 04:58:46.237449 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.2374281 podStartE2EDuration="1.2374281s" podCreationTimestamp="2025-09-16 04:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:58:46.237131276 +0000 UTC m=+1.154184088" watchObservedRunningTime="2025-09-16 04:58:46.2374281 +0000 UTC m=+1.154480912" Sep 16 04:58:46.254712 kubelet[2715]: I0916 04:58:46.254644 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.254625663 podStartE2EDuration="3.254625663s" podCreationTimestamp="2025-09-16 04:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-16 04:58:46.245838318 +0000 UTC m=+1.162891140" watchObservedRunningTime="2025-09-16 04:58:46.254625663 +0000 UTC m=+1.171678475" Sep 16 04:58:46.254931 kubelet[2715]: I0916 04:58:46.254779 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.254772331 podStartE2EDuration="1.254772331s" podCreationTimestamp="2025-09-16 04:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:58:46.254582742 +0000 UTC m=+1.171635564" watchObservedRunningTime="2025-09-16 04:58:46.254772331 +0000 UTC m=+1.171825143" Sep 16 04:58:47.041780 sudo[1771]: pam_unix(sudo:session): session closed for user root Sep 16 04:58:47.043607 sshd[1770]: Connection closed by 10.0.0.1 port 58780 Sep 16 04:58:47.044358 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Sep 16 04:58:47.050578 systemd[1]: sshd@6-10.0.0.114:22-10.0.0.1:58780.service: Deactivated successfully. Sep 16 04:58:47.053338 systemd[1]: session-7.scope: Deactivated successfully. Sep 16 04:58:47.053607 systemd[1]: session-7.scope: Consumed 6.304s CPU time, 264M memory peak. Sep 16 04:58:47.055597 systemd-logind[1543]: Session 7 logged out. Waiting for processes to exit. Sep 16 04:58:47.057359 systemd-logind[1543]: Removed session 7. 
Sep 16 04:58:47.218025 kubelet[2715]: E0916 04:58:47.217972 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:48.220304 kubelet[2715]: E0916 04:58:48.220242 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:48.977348 update_engine[1548]: I20250916 04:58:48.977226 1548 update_attempter.cc:509] Updating boot flags... Sep 16 04:58:49.171217 kubelet[2715]: I0916 04:58:49.168524 2715 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 04:58:49.171217 kubelet[2715]: I0916 04:58:49.169280 2715 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 04:58:49.171379 containerd[1573]: time="2025-09-16T04:58:49.169045052Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 16 04:58:49.172662 kubelet[2715]: E0916 04:58:49.172628 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:49.222493 kubelet[2715]: E0916 04:58:49.222462 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:49.605607 kubelet[2715]: E0916 04:58:49.605557 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:49.798084 systemd[1]: Created slice kubepods-besteffort-pod82c73cb7_a219_4a94_af73_6599aa02e54b.slice - libcontainer container kubepods-besteffort-pod82c73cb7_a219_4a94_af73_6599aa02e54b.slice. Sep 16 04:58:49.805278 kubelet[2715]: I0916 04:58:49.805215 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-etc-cni-netd\") pod \"cilium-jg4dn\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " pod="kube-system/cilium-jg4dn" Sep 16 04:58:49.805278 kubelet[2715]: I0916 04:58:49.805265 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-hubble-tls\") pod \"cilium-jg4dn\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " pod="kube-system/cilium-jg4dn" Sep 16 04:58:49.805278 kubelet[2715]: I0916 04:58:49.805287 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82c73cb7-a219-4a94-af73-6599aa02e54b-lib-modules\") pod \"kube-proxy-r9lnh\" (UID: \"82c73cb7-a219-4a94-af73-6599aa02e54b\") " 
pod="kube-system/kube-proxy-r9lnh" Sep 16 04:58:49.805483 kubelet[2715]: I0916 04:58:49.805301 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-hostproc\") pod \"cilium-jg4dn\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " pod="kube-system/cilium-jg4dn" Sep 16 04:58:49.805483 kubelet[2715]: I0916 04:58:49.805316 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cilium-run\") pod \"cilium-jg4dn\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " pod="kube-system/cilium-jg4dn" Sep 16 04:58:49.805483 kubelet[2715]: I0916 04:58:49.805336 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-bpf-maps\") pod \"cilium-jg4dn\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " pod="kube-system/cilium-jg4dn" Sep 16 04:58:49.805483 kubelet[2715]: I0916 04:58:49.805349 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-lib-modules\") pod \"cilium-jg4dn\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " pod="kube-system/cilium-jg4dn" Sep 16 04:58:49.805483 kubelet[2715]: I0916 04:58:49.805364 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-clustermesh-secrets\") pod \"cilium-jg4dn\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " pod="kube-system/cilium-jg4dn" Sep 16 04:58:49.805483 kubelet[2715]: I0916 04:58:49.805381 2715 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/82c73cb7-a219-4a94-af73-6599aa02e54b-kube-proxy\") pod \"kube-proxy-r9lnh\" (UID: \"82c73cb7-a219-4a94-af73-6599aa02e54b\") " pod="kube-system/kube-proxy-r9lnh" Sep 16 04:58:49.805640 kubelet[2715]: I0916 04:58:49.805399 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgd66\" (UniqueName: \"kubernetes.io/projected/82c73cb7-a219-4a94-af73-6599aa02e54b-kube-api-access-dgd66\") pod \"kube-proxy-r9lnh\" (UID: \"82c73cb7-a219-4a94-af73-6599aa02e54b\") " pod="kube-system/kube-proxy-r9lnh" Sep 16 04:58:49.805640 kubelet[2715]: I0916 04:58:49.805422 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-host-proc-sys-kernel\") pod \"cilium-jg4dn\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " pod="kube-system/cilium-jg4dn" Sep 16 04:58:49.805640 kubelet[2715]: I0916 04:58:49.805438 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cilium-cgroup\") pod \"cilium-jg4dn\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " pod="kube-system/cilium-jg4dn" Sep 16 04:58:49.805640 kubelet[2715]: I0916 04:58:49.805491 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cilium-config-path\") pod \"cilium-jg4dn\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " pod="kube-system/cilium-jg4dn" Sep 16 04:58:49.805640 kubelet[2715]: I0916 04:58:49.805506 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-xtables-lock\") pod \"cilium-jg4dn\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " pod="kube-system/cilium-jg4dn" Sep 16 04:58:49.805816 kubelet[2715]: I0916 04:58:49.805520 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k52hs\" (UniqueName: \"kubernetes.io/projected/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-kube-api-access-k52hs\") pod \"cilium-jg4dn\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " pod="kube-system/cilium-jg4dn" Sep 16 04:58:49.805816 kubelet[2715]: I0916 04:58:49.805535 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82c73cb7-a219-4a94-af73-6599aa02e54b-xtables-lock\") pod \"kube-proxy-r9lnh\" (UID: \"82c73cb7-a219-4a94-af73-6599aa02e54b\") " pod="kube-system/kube-proxy-r9lnh" Sep 16 04:58:49.805816 kubelet[2715]: I0916 04:58:49.805550 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cni-path\") pod \"cilium-jg4dn\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " pod="kube-system/cilium-jg4dn" Sep 16 04:58:49.805816 kubelet[2715]: I0916 04:58:49.805571 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-host-proc-sys-net\") pod \"cilium-jg4dn\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " pod="kube-system/cilium-jg4dn" Sep 16 04:58:49.818159 systemd[1]: Created slice kubepods-burstable-pod80f0e3ab_edd0_4e25_98da_f8ebd78284e6.slice - libcontainer container kubepods-burstable-pod80f0e3ab_edd0_4e25_98da_f8ebd78284e6.slice. 
Sep 16 04:58:49.912580 kubelet[2715]: E0916 04:58:49.912429 2715 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 16 04:58:49.912580 kubelet[2715]: E0916 04:58:49.912483 2715 projected.go:194] Error preparing data for projected volume kube-api-access-k52hs for pod kube-system/cilium-jg4dn: configmap "kube-root-ca.crt" not found Sep 16 04:58:49.912580 kubelet[2715]: E0916 04:58:49.912426 2715 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 16 04:58:49.912765 kubelet[2715]: E0916 04:58:49.912576 2715 projected.go:194] Error preparing data for projected volume kube-api-access-dgd66 for pod kube-system/kube-proxy-r9lnh: configmap "kube-root-ca.crt" not found Sep 16 04:58:49.912765 kubelet[2715]: E0916 04:58:49.912549 2715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-kube-api-access-k52hs podName:80f0e3ab-edd0-4e25-98da-f8ebd78284e6 nodeName:}" failed. No retries permitted until 2025-09-16 04:58:50.412523483 +0000 UTC m=+5.329576295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k52hs" (UniqueName: "kubernetes.io/projected/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-kube-api-access-k52hs") pod "cilium-jg4dn" (UID: "80f0e3ab-edd0-4e25-98da-f8ebd78284e6") : configmap "kube-root-ca.crt" not found Sep 16 04:58:49.913354 kubelet[2715]: E0916 04:58:49.912677 2715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82c73cb7-a219-4a94-af73-6599aa02e54b-kube-api-access-dgd66 podName:82c73cb7-a219-4a94-af73-6599aa02e54b nodeName:}" failed. No retries permitted until 2025-09-16 04:58:50.412652847 +0000 UTC m=+5.329705659 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dgd66" (UniqueName: "kubernetes.io/projected/82c73cb7-a219-4a94-af73-6599aa02e54b-kube-api-access-dgd66") pod "kube-proxy-r9lnh" (UID: "82c73cb7-a219-4a94-af73-6599aa02e54b") : configmap "kube-root-ca.crt" not found Sep 16 04:58:50.154651 systemd[1]: Created slice kubepods-besteffort-pod750dfedb_3c9b_4e41_9960_d84c76990ab1.slice - libcontainer container kubepods-besteffort-pod750dfedb_3c9b_4e41_9960_d84c76990ab1.slice. Sep 16 04:58:50.209661 kubelet[2715]: I0916 04:58:50.209495 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c562d\" (UniqueName: \"kubernetes.io/projected/750dfedb-3c9b-4e41-9960-d84c76990ab1-kube-api-access-c562d\") pod \"cilium-operator-5d85765b45-zg877\" (UID: \"750dfedb-3c9b-4e41-9960-d84c76990ab1\") " pod="kube-system/cilium-operator-5d85765b45-zg877" Sep 16 04:58:50.209661 kubelet[2715]: I0916 04:58:50.209569 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/750dfedb-3c9b-4e41-9960-d84c76990ab1-cilium-config-path\") pod \"cilium-operator-5d85765b45-zg877\" (UID: \"750dfedb-3c9b-4e41-9960-d84c76990ab1\") " pod="kube-system/cilium-operator-5d85765b45-zg877" Sep 16 04:58:50.224199 kubelet[2715]: E0916 04:58:50.224126 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:50.459823 kubelet[2715]: E0916 04:58:50.459654 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:50.460579 containerd[1573]: time="2025-09-16T04:58:50.460516637Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-5d85765b45-zg877,Uid:750dfedb-3c9b-4e41-9960-d84c76990ab1,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:50.594139 containerd[1573]: time="2025-09-16T04:58:50.594071347Z" level=info msg="connecting to shim b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3" address="unix:///run/containerd/s/18e3d94cb2b30f62f1a57926d87b5642145d9f6e3e74477a83354ba2e001ae2e" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:50.635561 systemd[1]: Started cri-containerd-b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3.scope - libcontainer container b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3. Sep 16 04:58:50.683417 containerd[1573]: time="2025-09-16T04:58:50.683363104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-zg877,Uid:750dfedb-3c9b-4e41-9960-d84c76990ab1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3\"" Sep 16 04:58:50.684442 kubelet[2715]: E0916 04:58:50.684413 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:50.686080 containerd[1573]: time="2025-09-16T04:58:50.686045544Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 16 04:58:50.710634 kubelet[2715]: E0916 04:58:50.710450 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:50.711604 containerd[1573]: time="2025-09-16T04:58:50.711147352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r9lnh,Uid:82c73cb7-a219-4a94-af73-6599aa02e54b,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:50.722413 kubelet[2715]: E0916 04:58:50.722348 
2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:50.722901 containerd[1573]: time="2025-09-16T04:58:50.722850631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jg4dn,Uid:80f0e3ab-edd0-4e25-98da-f8ebd78284e6,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:50.741450 containerd[1573]: time="2025-09-16T04:58:50.741383867Z" level=info msg="connecting to shim 695fce99b9b00f19ab8cb62c1f96474e87cd2a301076408371abf4556b6f253b" address="unix:///run/containerd/s/5212d629d4cf499e09fbd88cd87e62c00f7ea0fd910e3ea4ce146f1b82ffafa9" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:50.752293 containerd[1573]: time="2025-09-16T04:58:50.752163114Z" level=info msg="connecting to shim a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35" address="unix:///run/containerd/s/098c3213bb290c2d4f009ac073583f2e561775d3a83347e0dd2da249b094c209" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:50.781432 systemd[1]: Started cri-containerd-a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35.scope - libcontainer container a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35. Sep 16 04:58:50.787368 systemd[1]: Started cri-containerd-695fce99b9b00f19ab8cb62c1f96474e87cd2a301076408371abf4556b6f253b.scope - libcontainer container 695fce99b9b00f19ab8cb62c1f96474e87cd2a301076408371abf4556b6f253b. 
Sep 16 04:58:50.821523 containerd[1573]: time="2025-09-16T04:58:50.821465227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jg4dn,Uid:80f0e3ab-edd0-4e25-98da-f8ebd78284e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\"" Sep 16 04:58:50.822532 kubelet[2715]: E0916 04:58:50.822497 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:50.828015 containerd[1573]: time="2025-09-16T04:58:50.827973385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r9lnh,Uid:82c73cb7-a219-4a94-af73-6599aa02e54b,Namespace:kube-system,Attempt:0,} returns sandbox id \"695fce99b9b00f19ab8cb62c1f96474e87cd2a301076408371abf4556b6f253b\"" Sep 16 04:58:50.829054 kubelet[2715]: E0916 04:58:50.829014 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:50.831588 containerd[1573]: time="2025-09-16T04:58:50.831526194Z" level=info msg="CreateContainer within sandbox \"695fce99b9b00f19ab8cb62c1f96474e87cd2a301076408371abf4556b6f253b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 16 04:58:50.848319 containerd[1573]: time="2025-09-16T04:58:50.848254461Z" level=info msg="Container c9f59cb5dc5ab291fe523cd609c1418c604f1b081ff370a949e92b3260d835a9: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:50.857479 containerd[1573]: time="2025-09-16T04:58:50.857420653Z" level=info msg="CreateContainer within sandbox \"695fce99b9b00f19ab8cb62c1f96474e87cd2a301076408371abf4556b6f253b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c9f59cb5dc5ab291fe523cd609c1418c604f1b081ff370a949e92b3260d835a9\"" Sep 16 04:58:50.858434 containerd[1573]: time="2025-09-16T04:58:50.858345415Z" 
level=info msg="StartContainer for \"c9f59cb5dc5ab291fe523cd609c1418c604f1b081ff370a949e92b3260d835a9\"" Sep 16 04:58:50.860824 containerd[1573]: time="2025-09-16T04:58:50.860767844Z" level=info msg="connecting to shim c9f59cb5dc5ab291fe523cd609c1418c604f1b081ff370a949e92b3260d835a9" address="unix:///run/containerd/s/5212d629d4cf499e09fbd88cd87e62c00f7ea0fd910e3ea4ce146f1b82ffafa9" protocol=ttrpc version=3 Sep 16 04:58:50.888487 systemd[1]: Started cri-containerd-c9f59cb5dc5ab291fe523cd609c1418c604f1b081ff370a949e92b3260d835a9.scope - libcontainer container c9f59cb5dc5ab291fe523cd609c1418c604f1b081ff370a949e92b3260d835a9. Sep 16 04:58:50.940402 containerd[1573]: time="2025-09-16T04:58:50.940342465Z" level=info msg="StartContainer for \"c9f59cb5dc5ab291fe523cd609c1418c604f1b081ff370a949e92b3260d835a9\" returns successfully" Sep 16 04:58:51.228223 kubelet[2715]: E0916 04:58:51.228047 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:51.239495 kubelet[2715]: I0916 04:58:51.239399 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r9lnh" podStartSLOduration=2.239379476 podStartE2EDuration="2.239379476s" podCreationTimestamp="2025-09-16 04:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:58:51.237911737 +0000 UTC m=+6.154964559" watchObservedRunningTime="2025-09-16 04:58:51.239379476 +0000 UTC m=+6.156432288" Sep 16 04:58:51.759699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3059884046.mount: Deactivated successfully. 
Sep 16 04:58:53.637571 containerd[1573]: time="2025-09-16T04:58:53.637504720Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:53.638268 containerd[1573]: time="2025-09-16T04:58:53.638235301Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 16 04:58:53.639474 containerd[1573]: time="2025-09-16T04:58:53.639446230Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:53.640679 containerd[1573]: time="2025-09-16T04:58:53.640632904Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.954546111s" Sep 16 04:58:53.640679 containerd[1573]: time="2025-09-16T04:58:53.640669383Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 16 04:58:53.642257 containerd[1573]: time="2025-09-16T04:58:53.641794341Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 16 04:58:53.643820 containerd[1573]: time="2025-09-16T04:58:53.643170873Z" level=info msg="CreateContainer within sandbox 
\"b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 16 04:58:53.688698 containerd[1573]: time="2025-09-16T04:58:53.688629267Z" level=info msg="Container 36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:53.695921 containerd[1573]: time="2025-09-16T04:58:53.695868906Z" level=info msg="CreateContainer within sandbox \"b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\"" Sep 16 04:58:53.696485 containerd[1573]: time="2025-09-16T04:58:53.696444053Z" level=info msg="StartContainer for \"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\"" Sep 16 04:58:53.697436 containerd[1573]: time="2025-09-16T04:58:53.697406152Z" level=info msg="connecting to shim 36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c" address="unix:///run/containerd/s/18e3d94cb2b30f62f1a57926d87b5642145d9f6e3e74477a83354ba2e001ae2e" protocol=ttrpc version=3 Sep 16 04:58:53.756427 systemd[1]: Started cri-containerd-36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c.scope - libcontainer container 36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c. 
Sep 16 04:58:53.872257 containerd[1573]: time="2025-09-16T04:58:53.872205217Z" level=info msg="StartContainer for \"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\" returns successfully" Sep 16 04:58:54.239883 kubelet[2715]: E0916 04:58:54.239830 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:54.249216 kubelet[2715]: I0916 04:58:54.249005 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-zg877" podStartSLOduration=1.292885825 podStartE2EDuration="4.248940781s" podCreationTimestamp="2025-09-16 04:58:50 +0000 UTC" firstStartedPulling="2025-09-16 04:58:50.685563031 +0000 UTC m=+5.602615843" lastFinishedPulling="2025-09-16 04:58:53.641617977 +0000 UTC m=+8.558670799" observedRunningTime="2025-09-16 04:58:54.248557378 +0000 UTC m=+9.165610200" watchObservedRunningTime="2025-09-16 04:58:54.248940781 +0000 UTC m=+9.165993594" Sep 16 04:58:55.242651 kubelet[2715]: E0916 04:58:55.242596 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:58:57.487154 kubelet[2715]: E0916 04:58:57.487093 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:06.617281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515824740.mount: Deactivated successfully. 
Sep 16 04:59:09.307623 containerd[1573]: time="2025-09-16T04:59:09.307536246Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:09.308271 containerd[1573]: time="2025-09-16T04:59:09.308229380Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 16 04:59:09.309653 containerd[1573]: time="2025-09-16T04:59:09.309609756Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:09.311250 containerd[1573]: time="2025-09-16T04:59:09.311218982Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.669384947s" Sep 16 04:59:09.311308 containerd[1573]: time="2025-09-16T04:59:09.311256913Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 16 04:59:09.313316 containerd[1573]: time="2025-09-16T04:59:09.313284727Z" level=info msg="CreateContainer within sandbox \"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 04:59:09.320500 containerd[1573]: time="2025-09-16T04:59:09.320446656Z" level=info msg="Container 388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690: CDI 
devices from CRI Config.CDIDevices: []" Sep 16 04:59:09.324622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1211633725.mount: Deactivated successfully. Sep 16 04:59:09.330689 containerd[1573]: time="2025-09-16T04:59:09.330632943Z" level=info msg="CreateContainer within sandbox \"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690\"" Sep 16 04:59:09.331406 containerd[1573]: time="2025-09-16T04:59:09.331349080Z" level=info msg="StartContainer for \"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690\"" Sep 16 04:59:09.332263 containerd[1573]: time="2025-09-16T04:59:09.332238252Z" level=info msg="connecting to shim 388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690" address="unix:///run/containerd/s/098c3213bb290c2d4f009ac073583f2e561775d3a83347e0dd2da249b094c209" protocol=ttrpc version=3 Sep 16 04:59:09.354345 systemd[1]: Started cri-containerd-388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690.scope - libcontainer container 388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690. Sep 16 04:59:09.394271 containerd[1573]: time="2025-09-16T04:59:09.394222191Z" level=info msg="StartContainer for \"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690\" returns successfully" Sep 16 04:59:09.409579 systemd[1]: cri-containerd-388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690.scope: Deactivated successfully. 
Sep 16 04:59:09.412613 containerd[1573]: time="2025-09-16T04:59:09.412552864Z" level=info msg="received exit event container_id:\"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690\" id:\"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690\" pid:3203 exited_at:{seconds:1757998749 nanos:412065898}" Sep 16 04:59:09.412763 containerd[1573]: time="2025-09-16T04:59:09.412632635Z" level=info msg="TaskExit event in podsandbox handler container_id:\"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690\" id:\"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690\" pid:3203 exited_at:{seconds:1757998749 nanos:412065898}" Sep 16 04:59:09.437496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690-rootfs.mount: Deactivated successfully. Sep 16 04:59:10.269369 kubelet[2715]: E0916 04:59:10.269295 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:11.273878 kubelet[2715]: E0916 04:59:11.273791 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:11.276036 containerd[1573]: time="2025-09-16T04:59:11.275969177Z" level=info msg="CreateContainer within sandbox \"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 04:59:11.287270 containerd[1573]: time="2025-09-16T04:59:11.287214177Z" level=info msg="Container 878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:11.292429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount660486880.mount: Deactivated successfully. 
Sep 16 04:59:11.298355 containerd[1573]: time="2025-09-16T04:59:11.298288265Z" level=info msg="CreateContainer within sandbox \"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df\"" Sep 16 04:59:11.305256 containerd[1573]: time="2025-09-16T04:59:11.305175533Z" level=info msg="StartContainer for \"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df\"" Sep 16 04:59:11.306199 containerd[1573]: time="2025-09-16T04:59:11.306150906Z" level=info msg="connecting to shim 878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df" address="unix:///run/containerd/s/098c3213bb290c2d4f009ac073583f2e561775d3a83347e0dd2da249b094c209" protocol=ttrpc version=3 Sep 16 04:59:11.343541 systemd[1]: Started cri-containerd-878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df.scope - libcontainer container 878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df. Sep 16 04:59:11.382514 containerd[1573]: time="2025-09-16T04:59:11.382179898Z" level=info msg="StartContainer for \"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df\" returns successfully" Sep 16 04:59:11.400070 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 04:59:11.400492 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:59:11.400860 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:59:11.404026 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 16 04:59:11.406130 containerd[1573]: time="2025-09-16T04:59:11.406066453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df\" id:\"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df\" pid:3248 exited_at:{seconds:1757998751 nanos:405654057}" Sep 16 04:59:11.406611 containerd[1573]: time="2025-09-16T04:59:11.406583455Z" level=info msg="received exit event container_id:\"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df\" id:\"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df\" pid:3248 exited_at:{seconds:1757998751 nanos:405654057}" Sep 16 04:59:11.407383 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 16 04:59:11.408568 systemd[1]: cri-containerd-878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df.scope: Deactivated successfully. Sep 16 04:59:11.446897 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:59:12.278439 kubelet[2715]: E0916 04:59:12.278345 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:12.280343 containerd[1573]: time="2025-09-16T04:59:12.280284424Z" level=info msg="CreateContainer within sandbox \"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 04:59:12.288628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df-rootfs.mount: Deactivated successfully. 
Sep 16 04:59:12.713032 containerd[1573]: time="2025-09-16T04:59:12.712842609Z" level=info msg="Container 07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:13.085969 containerd[1573]: time="2025-09-16T04:59:13.085907907Z" level=info msg="CreateContainer within sandbox \"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579\"" Sep 16 04:59:13.086801 containerd[1573]: time="2025-09-16T04:59:13.086626888Z" level=info msg="StartContainer for \"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579\"" Sep 16 04:59:13.088292 containerd[1573]: time="2025-09-16T04:59:13.088263113Z" level=info msg="connecting to shim 07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579" address="unix:///run/containerd/s/098c3213bb290c2d4f009ac073583f2e561775d3a83347e0dd2da249b094c209" protocol=ttrpc version=3 Sep 16 04:59:13.132567 systemd[1]: Started cri-containerd-07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579.scope - libcontainer container 07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579. Sep 16 04:59:13.190502 systemd[1]: cri-containerd-07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579.scope: Deactivated successfully. 
Sep 16 04:59:13.192487 containerd[1573]: time="2025-09-16T04:59:13.192421944Z" level=info msg="received exit event container_id:\"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579\" id:\"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579\" pid:3296 exited_at:{seconds:1757998753 nanos:192166083}" Sep 16 04:59:13.192598 containerd[1573]: time="2025-09-16T04:59:13.192520850Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579\" id:\"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579\" pid:3296 exited_at:{seconds:1757998753 nanos:192166083}" Sep 16 04:59:13.192933 containerd[1573]: time="2025-09-16T04:59:13.192874615Z" level=info msg="StartContainer for \"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579\" returns successfully" Sep 16 04:59:13.224863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579-rootfs.mount: Deactivated successfully. 
Sep 16 04:59:13.284622 kubelet[2715]: E0916 04:59:13.284575 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:13.286996 containerd[1573]: time="2025-09-16T04:59:13.286943563Z" level=info msg="CreateContainer within sandbox \"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 04:59:13.299685 containerd[1573]: time="2025-09-16T04:59:13.299620679Z" level=info msg="Container 6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:13.311988 containerd[1573]: time="2025-09-16T04:59:13.311907770Z" level=info msg="CreateContainer within sandbox \"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631\"" Sep 16 04:59:13.314462 containerd[1573]: time="2025-09-16T04:59:13.314389554Z" level=info msg="StartContainer for \"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631\"" Sep 16 04:59:13.319234 containerd[1573]: time="2025-09-16T04:59:13.317653769Z" level=info msg="connecting to shim 6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631" address="unix:///run/containerd/s/098c3213bb290c2d4f009ac073583f2e561775d3a83347e0dd2da249b094c209" protocol=ttrpc version=3 Sep 16 04:59:13.346419 systemd[1]: Started cri-containerd-6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631.scope - libcontainer container 6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631. Sep 16 04:59:13.385455 systemd[1]: cri-containerd-6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631.scope: Deactivated successfully. 
Sep 16 04:59:13.386813 containerd[1573]: time="2025-09-16T04:59:13.386708016Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631\" id:\"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631\" pid:3335 exited_at:{seconds:1757998753 nanos:386090646}" Sep 16 04:59:13.388423 containerd[1573]: time="2025-09-16T04:59:13.388340765Z" level=info msg="received exit event container_id:\"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631\" id:\"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631\" pid:3335 exited_at:{seconds:1757998753 nanos:386090646}" Sep 16 04:59:13.399069 containerd[1573]: time="2025-09-16T04:59:13.399005367Z" level=info msg="StartContainer for \"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631\" returns successfully" Sep 16 04:59:13.416441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631-rootfs.mount: Deactivated successfully. 
Sep 16 04:59:14.289065 kubelet[2715]: E0916 04:59:14.289026 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:14.290934 containerd[1573]: time="2025-09-16T04:59:14.290886057Z" level=info msg="CreateContainer within sandbox \"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 04:59:14.307574 containerd[1573]: time="2025-09-16T04:59:14.307497952Z" level=info msg="Container 7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:14.316258 containerd[1573]: time="2025-09-16T04:59:14.316172974Z" level=info msg="CreateContainer within sandbox \"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\"" Sep 16 04:59:14.317013 containerd[1573]: time="2025-09-16T04:59:14.316985590Z" level=info msg="StartContainer for \"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\"" Sep 16 04:59:14.318012 containerd[1573]: time="2025-09-16T04:59:14.317986421Z" level=info msg="connecting to shim 7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8" address="unix:///run/containerd/s/098c3213bb290c2d4f009ac073583f2e561775d3a83347e0dd2da249b094c209" protocol=ttrpc version=3 Sep 16 04:59:14.349546 systemd[1]: Started cri-containerd-7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8.scope - libcontainer container 7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8. 
Sep 16 04:59:14.401517 containerd[1573]: time="2025-09-16T04:59:14.401473360Z" level=info msg="StartContainer for \"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\" returns successfully" Sep 16 04:59:14.496618 containerd[1573]: time="2025-09-16T04:59:14.496554144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\" id:\"8ea1add458d5a3c4c798075f5e58a0240e6004eb1d814898636489875ab131a5\" pid:3403 exited_at:{seconds:1757998754 nanos:495994162}" Sep 16 04:59:14.575597 kubelet[2715]: I0916 04:59:14.575401 2715 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 16 04:59:14.620535 systemd[1]: Created slice kubepods-burstable-pode5b10f6b_9a67_4d6a_9dd6_9b779e98dfae.slice - libcontainer container kubepods-burstable-pode5b10f6b_9a67_4d6a_9dd6_9b779e98dfae.slice. Sep 16 04:59:14.631728 systemd[1]: Created slice kubepods-burstable-pod6690d005_fd79_4dfe_9f3c_544ffd953cd0.slice - libcontainer container kubepods-burstable-pod6690d005_fd79_4dfe_9f3c_544ffd953cd0.slice. 
Sep 16 04:59:14.674508 kubelet[2715]: I0916 04:59:14.674434 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6690d005-fd79-4dfe-9f3c-544ffd953cd0-config-volume\") pod \"coredns-7c65d6cfc9-5twc6\" (UID: \"6690d005-fd79-4dfe-9f3c-544ffd953cd0\") " pod="kube-system/coredns-7c65d6cfc9-5twc6" Sep 16 04:59:14.674508 kubelet[2715]: I0916 04:59:14.674492 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvxv4\" (UniqueName: \"kubernetes.io/projected/6690d005-fd79-4dfe-9f3c-544ffd953cd0-kube-api-access-fvxv4\") pod \"coredns-7c65d6cfc9-5twc6\" (UID: \"6690d005-fd79-4dfe-9f3c-544ffd953cd0\") " pod="kube-system/coredns-7c65d6cfc9-5twc6" Sep 16 04:59:14.674508 kubelet[2715]: I0916 04:59:14.674519 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbcrm\" (UniqueName: \"kubernetes.io/projected/e5b10f6b-9a67-4d6a-9dd6-9b779e98dfae-kube-api-access-vbcrm\") pod \"coredns-7c65d6cfc9-c6sm8\" (UID: \"e5b10f6b-9a67-4d6a-9dd6-9b779e98dfae\") " pod="kube-system/coredns-7c65d6cfc9-c6sm8" Sep 16 04:59:14.674776 kubelet[2715]: I0916 04:59:14.674536 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5b10f6b-9a67-4d6a-9dd6-9b779e98dfae-config-volume\") pod \"coredns-7c65d6cfc9-c6sm8\" (UID: \"e5b10f6b-9a67-4d6a-9dd6-9b779e98dfae\") " pod="kube-system/coredns-7c65d6cfc9-c6sm8" Sep 16 04:59:14.928939 kubelet[2715]: E0916 04:59:14.928808 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:14.930666 containerd[1573]: time="2025-09-16T04:59:14.930617835Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-c6sm8,Uid:e5b10f6b-9a67-4d6a-9dd6-9b779e98dfae,Namespace:kube-system,Attempt:0,}" Sep 16 04:59:14.936007 kubelet[2715]: E0916 04:59:14.935968 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:14.936470 containerd[1573]: time="2025-09-16T04:59:14.936410850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5twc6,Uid:6690d005-fd79-4dfe-9f3c-544ffd953cd0,Namespace:kube-system,Attempt:0,}" Sep 16 04:59:15.294823 kubelet[2715]: E0916 04:59:15.294779 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:15.692136 systemd[1]: Started sshd@7-10.0.0.114:22-10.0.0.1:55720.service - OpenSSH per-connection server daemon (10.0.0.1:55720). Sep 16 04:59:15.791546 sshd[3494]: Accepted publickey for core from 10.0.0.1 port 55720 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:59:15.793982 sshd-session[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:15.799535 systemd-logind[1543]: New session 8 of user core. Sep 16 04:59:15.808311 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 16 04:59:15.955489 sshd[3497]: Connection closed by 10.0.0.1 port 55720 Sep 16 04:59:15.955740 sshd-session[3494]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:15.960421 systemd[1]: sshd@7-10.0.0.114:22-10.0.0.1:55720.service: Deactivated successfully. Sep 16 04:59:15.962620 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 04:59:15.963405 systemd-logind[1543]: Session 8 logged out. Waiting for processes to exit. Sep 16 04:59:15.964837 systemd-logind[1543]: Removed session 8. 
Sep 16 04:59:16.296781 kubelet[2715]: E0916 04:59:16.296740 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:16.985728 systemd-networkd[1473]: cilium_host: Link UP Sep 16 04:59:16.985906 systemd-networkd[1473]: cilium_net: Link UP Sep 16 04:59:16.987249 systemd-networkd[1473]: cilium_net: Gained carrier Sep 16 04:59:16.987456 systemd-networkd[1473]: cilium_host: Gained carrier Sep 16 04:59:17.070418 systemd-networkd[1473]: cilium_net: Gained IPv6LL Sep 16 04:59:17.107942 systemd-networkd[1473]: cilium_vxlan: Link UP Sep 16 04:59:17.107954 systemd-networkd[1473]: cilium_vxlan: Gained carrier Sep 16 04:59:17.298973 kubelet[2715]: E0916 04:59:17.298835 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:17.351439 kernel: NET: Registered PF_ALG protocol family Sep 16 04:59:17.829444 systemd-networkd[1473]: cilium_host: Gained IPv6LL Sep 16 04:59:18.141976 systemd-networkd[1473]: lxc_health: Link UP Sep 16 04:59:18.142344 systemd-networkd[1473]: lxc_health: Gained carrier Sep 16 04:59:18.405408 systemd-networkd[1473]: cilium_vxlan: Gained IPv6LL Sep 16 04:59:18.722533 kernel: eth0: renamed from tmpcda52 Sep 16 04:59:18.721929 systemd-networkd[1473]: lxc936ac8f2ef3a: Link UP Sep 16 04:59:18.722282 systemd-networkd[1473]: lxc936ac8f2ef3a: Gained carrier Sep 16 04:59:18.725082 kubelet[2715]: E0916 04:59:18.725038 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:18.748750 systemd-networkd[1473]: lxc01d00857e2af: Link UP Sep 16 04:59:18.755696 kubelet[2715]: I0916 04:59:18.755387 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/cilium-jg4dn" podStartSLOduration=11.266601316 podStartE2EDuration="29.755363862s" podCreationTimestamp="2025-09-16 04:58:49 +0000 UTC" firstStartedPulling="2025-09-16 04:58:50.823280806 +0000 UTC m=+5.740333619" lastFinishedPulling="2025-09-16 04:59:09.312043353 +0000 UTC m=+24.229096165" observedRunningTime="2025-09-16 04:59:15.310256127 +0000 UTC m=+30.227308959" watchObservedRunningTime="2025-09-16 04:59:18.755363862 +0000 UTC m=+33.672416674" Sep 16 04:59:18.757284 kernel: eth0: renamed from tmp122ea Sep 16 04:59:18.759112 systemd-networkd[1473]: lxc01d00857e2af: Gained carrier Sep 16 04:59:19.302096 kubelet[2715]: E0916 04:59:19.302062 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:19.877533 systemd-networkd[1473]: lxc936ac8f2ef3a: Gained IPv6LL Sep 16 04:59:19.941480 systemd-networkd[1473]: lxc_health: Gained IPv6LL Sep 16 04:59:20.303894 kubelet[2715]: E0916 04:59:20.303857 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:20.517433 systemd-networkd[1473]: lxc01d00857e2af: Gained IPv6LL Sep 16 04:59:20.969813 systemd[1]: Started sshd@8-10.0.0.114:22-10.0.0.1:41314.service - OpenSSH per-connection server daemon (10.0.0.1:41314). Sep 16 04:59:21.034310 sshd[3886]: Accepted publickey for core from 10.0.0.1 port 41314 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:59:21.035793 sshd-session[3886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:21.039975 systemd-logind[1543]: New session 9 of user core. Sep 16 04:59:21.047326 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 16 04:59:21.180162 sshd[3890]: Connection closed by 10.0.0.1 port 41314 Sep 16 04:59:21.180533 sshd-session[3886]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:21.185426 systemd[1]: sshd@8-10.0.0.114:22-10.0.0.1:41314.service: Deactivated successfully. Sep 16 04:59:21.187461 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 04:59:21.188429 systemd-logind[1543]: Session 9 logged out. Waiting for processes to exit. Sep 16 04:59:21.189794 systemd-logind[1543]: Removed session 9. Sep 16 04:59:22.647134 containerd[1573]: time="2025-09-16T04:59:22.647066393Z" level=info msg="connecting to shim 122ea80d1666947a2d97b465732de338ca9068c46881f0794223e2dd8a44cc64" address="unix:///run/containerd/s/aa7a6f15a8fafaf72a81a70a40a894d8cf3e21bb14cbea7252ee4a754eb61216" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:22.647621 containerd[1573]: time="2025-09-16T04:59:22.647076011Z" level=info msg="connecting to shim cda52960aa2a2479095d57c92377741efc221c9cb7a9c1979a1a86abfcb5a1a9" address="unix:///run/containerd/s/dcac8559fb655fbb63f16f3b0dc7b8275839eb5d22369b811dfceab70781442a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:22.705523 systemd[1]: Started cri-containerd-122ea80d1666947a2d97b465732de338ca9068c46881f0794223e2dd8a44cc64.scope - libcontainer container 122ea80d1666947a2d97b465732de338ca9068c46881f0794223e2dd8a44cc64. Sep 16 04:59:22.710627 systemd[1]: Started cri-containerd-cda52960aa2a2479095d57c92377741efc221c9cb7a9c1979a1a86abfcb5a1a9.scope - libcontainer container cda52960aa2a2479095d57c92377741efc221c9cb7a9c1979a1a86abfcb5a1a9. 
Sep 16 04:59:22.725831 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:59:22.727953 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:59:22.770622 containerd[1573]: time="2025-09-16T04:59:22.770562059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c6sm8,Uid:e5b10f6b-9a67-4d6a-9dd6-9b779e98dfae,Namespace:kube-system,Attempt:0,} returns sandbox id \"cda52960aa2a2479095d57c92377741efc221c9cb7a9c1979a1a86abfcb5a1a9\"" Sep 16 04:59:22.775347 containerd[1573]: time="2025-09-16T04:59:22.775297919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5twc6,Uid:6690d005-fd79-4dfe-9f3c-544ffd953cd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"122ea80d1666947a2d97b465732de338ca9068c46881f0794223e2dd8a44cc64\"" Sep 16 04:59:22.776297 kubelet[2715]: E0916 04:59:22.776270 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:22.776741 kubelet[2715]: E0916 04:59:22.776270 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:22.778569 containerd[1573]: time="2025-09-16T04:59:22.778483330Z" level=info msg="CreateContainer within sandbox \"122ea80d1666947a2d97b465732de338ca9068c46881f0794223e2dd8a44cc64\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:59:22.778744 containerd[1573]: time="2025-09-16T04:59:22.778485955Z" level=info msg="CreateContainer within sandbox \"cda52960aa2a2479095d57c92377741efc221c9cb7a9c1979a1a86abfcb5a1a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:59:22.802146 containerd[1573]: 
time="2025-09-16T04:59:22.802084850Z" level=info msg="Container 39a98516cfa909f2a681aec9da133a889a1402412b6529abdedbeba1ff4402da: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:22.812435 containerd[1573]: time="2025-09-16T04:59:22.812375349Z" level=info msg="CreateContainer within sandbox \"cda52960aa2a2479095d57c92377741efc221c9cb7a9c1979a1a86abfcb5a1a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"39a98516cfa909f2a681aec9da133a889a1402412b6529abdedbeba1ff4402da\"" Sep 16 04:59:22.813249 containerd[1573]: time="2025-09-16T04:59:22.813073721Z" level=info msg="StartContainer for \"39a98516cfa909f2a681aec9da133a889a1402412b6529abdedbeba1ff4402da\"" Sep 16 04:59:22.814322 containerd[1573]: time="2025-09-16T04:59:22.814295064Z" level=info msg="connecting to shim 39a98516cfa909f2a681aec9da133a889a1402412b6529abdedbeba1ff4402da" address="unix:///run/containerd/s/dcac8559fb655fbb63f16f3b0dc7b8275839eb5d22369b811dfceab70781442a" protocol=ttrpc version=3 Sep 16 04:59:22.824469 containerd[1573]: time="2025-09-16T04:59:22.824415324Z" level=info msg="Container 703de214ea8dd36cf9e7bb03e90b64a3881e0679564b24b29496a81f2be3000b: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:22.835663 containerd[1573]: time="2025-09-16T04:59:22.835614079Z" level=info msg="CreateContainer within sandbox \"122ea80d1666947a2d97b465732de338ca9068c46881f0794223e2dd8a44cc64\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"703de214ea8dd36cf9e7bb03e90b64a3881e0679564b24b29496a81f2be3000b\"" Sep 16 04:59:22.836469 containerd[1573]: time="2025-09-16T04:59:22.836418449Z" level=info msg="StartContainer for \"703de214ea8dd36cf9e7bb03e90b64a3881e0679564b24b29496a81f2be3000b\"" Sep 16 04:59:22.837821 containerd[1573]: time="2025-09-16T04:59:22.837773372Z" level=info msg="connecting to shim 703de214ea8dd36cf9e7bb03e90b64a3881e0679564b24b29496a81f2be3000b" 
address="unix:///run/containerd/s/aa7a6f15a8fafaf72a81a70a40a894d8cf3e21bb14cbea7252ee4a754eb61216" protocol=ttrpc version=3 Sep 16 04:59:22.839554 systemd[1]: Started cri-containerd-39a98516cfa909f2a681aec9da133a889a1402412b6529abdedbeba1ff4402da.scope - libcontainer container 39a98516cfa909f2a681aec9da133a889a1402412b6529abdedbeba1ff4402da. Sep 16 04:59:22.866673 systemd[1]: Started cri-containerd-703de214ea8dd36cf9e7bb03e90b64a3881e0679564b24b29496a81f2be3000b.scope - libcontainer container 703de214ea8dd36cf9e7bb03e90b64a3881e0679564b24b29496a81f2be3000b. Sep 16 04:59:22.887764 containerd[1573]: time="2025-09-16T04:59:22.887706048Z" level=info msg="StartContainer for \"39a98516cfa909f2a681aec9da133a889a1402412b6529abdedbeba1ff4402da\" returns successfully" Sep 16 04:59:22.911284 containerd[1573]: time="2025-09-16T04:59:22.911123232Z" level=info msg="StartContainer for \"703de214ea8dd36cf9e7bb03e90b64a3881e0679564b24b29496a81f2be3000b\" returns successfully" Sep 16 04:59:23.315104 kubelet[2715]: E0916 04:59:23.314685 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:23.334364 kubelet[2715]: E0916 04:59:23.334267 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:23.406877 kubelet[2715]: I0916 04:59:23.406815 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5twc6" podStartSLOduration=33.406792729 podStartE2EDuration="33.406792729s" podCreationTimestamp="2025-09-16 04:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:59:23.393626212 +0000 UTC m=+38.310679044" watchObservedRunningTime="2025-09-16 04:59:23.406792729 +0000 UTC 
m=+38.323845541" Sep 16 04:59:23.420020 kubelet[2715]: I0916 04:59:23.419942 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-c6sm8" podStartSLOduration=33.419905284 podStartE2EDuration="33.419905284s" podCreationTimestamp="2025-09-16 04:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:59:23.419457113 +0000 UTC m=+38.336509925" watchObservedRunningTime="2025-09-16 04:59:23.419905284 +0000 UTC m=+38.336958096" Sep 16 04:59:24.335719 kubelet[2715]: E0916 04:59:24.335680 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:24.335719 kubelet[2715]: E0916 04:59:24.335720 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:25.359755 kubelet[2715]: E0916 04:59:25.359711 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:25.360269 kubelet[2715]: E0916 04:59:25.359932 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:26.203571 systemd[1]: Started sshd@9-10.0.0.114:22-10.0.0.1:41322.service - OpenSSH per-connection server daemon (10.0.0.1:41322). 
Sep 16 04:59:26.269980 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 41322 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:59:26.272384 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:26.278305 systemd-logind[1543]: New session 10 of user core. Sep 16 04:59:26.290444 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 16 04:59:26.422149 sshd[4091]: Connection closed by 10.0.0.1 port 41322 Sep 16 04:59:26.422615 sshd-session[4088]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:26.427356 systemd[1]: sshd@9-10.0.0.114:22-10.0.0.1:41322.service: Deactivated successfully. Sep 16 04:59:26.429406 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 04:59:26.430232 systemd-logind[1543]: Session 10 logged out. Waiting for processes to exit. Sep 16 04:59:26.431497 systemd-logind[1543]: Removed session 10. Sep 16 04:59:31.438474 systemd[1]: Started sshd@10-10.0.0.114:22-10.0.0.1:46004.service - OpenSSH per-connection server daemon (10.0.0.1:46004). Sep 16 04:59:31.497138 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 46004 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:59:31.498702 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:31.503336 systemd-logind[1543]: New session 11 of user core. Sep 16 04:59:31.521327 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 04:59:31.633637 sshd[4110]: Connection closed by 10.0.0.1 port 46004 Sep 16 04:59:31.633962 sshd-session[4107]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:31.638037 systemd[1]: sshd@10-10.0.0.114:22-10.0.0.1:46004.service: Deactivated successfully. Sep 16 04:59:31.640076 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 04:59:31.640862 systemd-logind[1543]: Session 11 logged out. Waiting for processes to exit. 
Sep 16 04:59:31.642101 systemd-logind[1543]: Removed session 11. Sep 16 04:59:36.657772 systemd[1]: Started sshd@11-10.0.0.114:22-10.0.0.1:46016.service - OpenSSH per-connection server daemon (10.0.0.1:46016). Sep 16 04:59:36.726517 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 46016 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:59:36.728296 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:36.733245 systemd-logind[1543]: New session 12 of user core. Sep 16 04:59:36.743383 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 04:59:37.040389 sshd[4127]: Connection closed by 10.0.0.1 port 46016 Sep 16 04:59:37.041133 sshd-session[4124]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:37.081589 systemd[1]: sshd@11-10.0.0.114:22-10.0.0.1:46016.service: Deactivated successfully. Sep 16 04:59:37.088404 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 04:59:37.090474 systemd-logind[1543]: Session 12 logged out. Waiting for processes to exit. Sep 16 04:59:37.102451 systemd[1]: Started sshd@12-10.0.0.114:22-10.0.0.1:46026.service - OpenSSH per-connection server daemon (10.0.0.1:46026). Sep 16 04:59:37.104647 systemd-logind[1543]: Removed session 12. Sep 16 04:59:37.184490 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 46026 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:59:37.186428 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:37.193069 systemd-logind[1543]: New session 13 of user core. Sep 16 04:59:37.204405 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 16 04:59:37.378387 sshd[4144]: Connection closed by 10.0.0.1 port 46026 Sep 16 04:59:37.378949 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:37.401529 systemd[1]: sshd@12-10.0.0.114:22-10.0.0.1:46026.service: Deactivated successfully. Sep 16 04:59:37.404593 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 04:59:37.411333 systemd-logind[1543]: Session 13 logged out. Waiting for processes to exit. Sep 16 04:59:37.412462 systemd[1]: Started sshd@13-10.0.0.114:22-10.0.0.1:46034.service - OpenSSH per-connection server daemon (10.0.0.1:46034). Sep 16 04:59:37.414776 systemd-logind[1543]: Removed session 13. Sep 16 04:59:37.483265 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 46034 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:59:37.485138 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:37.490689 systemd-logind[1543]: New session 14 of user core. Sep 16 04:59:37.500542 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 16 04:59:37.627477 sshd[4159]: Connection closed by 10.0.0.1 port 46034 Sep 16 04:59:37.627890 sshd-session[4156]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:37.631631 systemd[1]: sshd@13-10.0.0.114:22-10.0.0.1:46034.service: Deactivated successfully. Sep 16 04:59:37.634365 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 04:59:37.636224 systemd-logind[1543]: Session 14 logged out. Waiting for processes to exit. Sep 16 04:59:37.637796 systemd-logind[1543]: Removed session 14. Sep 16 04:59:42.641364 systemd[1]: Started sshd@14-10.0.0.114:22-10.0.0.1:37956.service - OpenSSH per-connection server daemon (10.0.0.1:37956). 
Sep 16 04:59:42.715631 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 37956 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:59:42.717536 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:42.722937 systemd-logind[1543]: New session 15 of user core. Sep 16 04:59:42.736440 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 04:59:42.859411 sshd[4175]: Connection closed by 10.0.0.1 port 37956 Sep 16 04:59:42.859823 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:42.866081 systemd[1]: sshd@14-10.0.0.114:22-10.0.0.1:37956.service: Deactivated successfully. Sep 16 04:59:42.868370 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 04:59:42.869490 systemd-logind[1543]: Session 15 logged out. Waiting for processes to exit. Sep 16 04:59:42.871124 systemd-logind[1543]: Removed session 15. Sep 16 04:59:47.877744 systemd[1]: Started sshd@15-10.0.0.114:22-10.0.0.1:37958.service - OpenSSH per-connection server daemon (10.0.0.1:37958). Sep 16 04:59:47.948200 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 37958 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:59:47.950097 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:47.955404 systemd-logind[1543]: New session 16 of user core. Sep 16 04:59:47.970366 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 16 04:59:48.089751 sshd[4193]: Connection closed by 10.0.0.1 port 37958 Sep 16 04:59:48.090177 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:48.106158 systemd[1]: sshd@15-10.0.0.114:22-10.0.0.1:37958.service: Deactivated successfully. Sep 16 04:59:48.108904 systemd[1]: session-16.scope: Deactivated successfully. Sep 16 04:59:48.110425 systemd-logind[1543]: Session 16 logged out. Waiting for processes to exit. 
Sep 16 04:59:48.114319 systemd[1]: Started sshd@16-10.0.0.114:22-10.0.0.1:37966.service - OpenSSH per-connection server daemon (10.0.0.1:37966). Sep 16 04:59:48.115136 systemd-logind[1543]: Removed session 16. Sep 16 04:59:48.174601 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 37966 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:59:48.176216 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:48.181218 systemd-logind[1543]: New session 17 of user core. Sep 16 04:59:48.195425 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 16 04:59:48.476342 sshd[4210]: Connection closed by 10.0.0.1 port 37966 Sep 16 04:59:48.476937 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:48.492105 systemd[1]: sshd@16-10.0.0.114:22-10.0.0.1:37966.service: Deactivated successfully. Sep 16 04:59:48.494872 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 04:59:48.495927 systemd-logind[1543]: Session 17 logged out. Waiting for processes to exit. Sep 16 04:59:48.498757 systemd-logind[1543]: Removed session 17. Sep 16 04:59:48.500736 systemd[1]: Started sshd@17-10.0.0.114:22-10.0.0.1:37976.service - OpenSSH per-connection server daemon (10.0.0.1:37976). Sep 16 04:59:48.564137 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 37976 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:59:48.565974 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:48.571313 systemd-logind[1543]: New session 18 of user core. Sep 16 04:59:48.589438 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 16 04:59:49.885616 sshd[4225]: Connection closed by 10.0.0.1 port 37976 Sep 16 04:59:49.886173 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:49.897492 systemd[1]: sshd@17-10.0.0.114:22-10.0.0.1:37976.service: Deactivated successfully. Sep 16 04:59:49.900277 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 04:59:49.901835 systemd-logind[1543]: Session 18 logged out. Waiting for processes to exit. Sep 16 04:59:49.910452 systemd[1]: Started sshd@18-10.0.0.114:22-10.0.0.1:45448.service - OpenSSH per-connection server daemon (10.0.0.1:45448). Sep 16 04:59:49.913177 systemd-logind[1543]: Removed session 18. Sep 16 04:59:49.966963 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 45448 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:59:49.968836 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:49.973912 systemd-logind[1543]: New session 19 of user core. Sep 16 04:59:49.982342 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 16 04:59:50.242035 sshd[4247]: Connection closed by 10.0.0.1 port 45448 Sep 16 04:59:50.242520 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:50.252600 systemd[1]: sshd@18-10.0.0.114:22-10.0.0.1:45448.service: Deactivated successfully. Sep 16 04:59:50.254831 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 04:59:50.257813 systemd-logind[1543]: Session 19 logged out. Waiting for processes to exit. Sep 16 04:59:50.262775 systemd[1]: Started sshd@19-10.0.0.114:22-10.0.0.1:45458.service - OpenSSH per-connection server daemon (10.0.0.1:45458). Sep 16 04:59:50.264541 systemd-logind[1543]: Removed session 19. 
Sep 16 04:59:50.334054 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 45458 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:59:50.336070 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:50.341360 systemd-logind[1543]: New session 20 of user core. Sep 16 04:59:50.350476 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 16 04:59:50.468592 sshd[4262]: Connection closed by 10.0.0.1 port 45458 Sep 16 04:59:50.469041 sshd-session[4259]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:50.473480 systemd[1]: sshd@19-10.0.0.114:22-10.0.0.1:45458.service: Deactivated successfully. Sep 16 04:59:50.475883 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 04:59:50.476737 systemd-logind[1543]: Session 20 logged out. Waiting for processes to exit. Sep 16 04:59:50.478112 systemd-logind[1543]: Removed session 20. Sep 16 04:59:55.484136 systemd[1]: Started sshd@20-10.0.0.114:22-10.0.0.1:45462.service - OpenSSH per-connection server daemon (10.0.0.1:45462). Sep 16 04:59:55.545949 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 45462 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:59:55.548050 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:55.553590 systemd-logind[1543]: New session 21 of user core. Sep 16 04:59:55.560399 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 16 04:59:55.684073 sshd[4281]: Connection closed by 10.0.0.1 port 45462 Sep 16 04:59:55.684572 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:55.689525 systemd[1]: sshd@20-10.0.0.114:22-10.0.0.1:45462.service: Deactivated successfully. Sep 16 04:59:55.691975 systemd[1]: session-21.scope: Deactivated successfully. Sep 16 04:59:55.692942 systemd-logind[1543]: Session 21 logged out. Waiting for processes to exit. 
Sep 16 04:59:55.694445 systemd-logind[1543]: Removed session 21. Sep 16 04:59:56.199762 kubelet[2715]: E0916 04:59:56.199696 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:59:56.199762 kubelet[2715]: E0916 04:59:56.199696 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 05:00:00.197534 kubelet[2715]: E0916 05:00:00.197483 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 05:00:00.702291 systemd[1]: Started sshd@21-10.0.0.114:22-10.0.0.1:46612.service - OpenSSH per-connection server daemon (10.0.0.1:46612). Sep 16 05:00:00.772914 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 46612 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 05:00:00.775069 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:00.779865 systemd-logind[1543]: New session 22 of user core. Sep 16 05:00:00.790557 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 16 05:00:00.919730 sshd[4300]: Connection closed by 10.0.0.1 port 46612 Sep 16 05:00:00.920207 sshd-session[4297]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:00.925738 systemd[1]: sshd@21-10.0.0.114:22-10.0.0.1:46612.service: Deactivated successfully. Sep 16 05:00:00.928673 systemd[1]: session-22.scope: Deactivated successfully. Sep 16 05:00:00.930079 systemd-logind[1543]: Session 22 logged out. Waiting for processes to exit. Sep 16 05:00:00.931735 systemd-logind[1543]: Removed session 22. 
Sep 16 05:00:05.937664 systemd[1]: Started sshd@22-10.0.0.114:22-10.0.0.1:46614.service - OpenSSH per-connection server daemon (10.0.0.1:46614). Sep 16 05:00:05.992837 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 46614 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 05:00:05.994326 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:05.998857 systemd-logind[1543]: New session 23 of user core. Sep 16 05:00:06.007357 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 16 05:00:06.128201 sshd[4316]: Connection closed by 10.0.0.1 port 46614 Sep 16 05:00:06.128620 sshd-session[4313]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:06.134321 systemd[1]: sshd@22-10.0.0.114:22-10.0.0.1:46614.service: Deactivated successfully. Sep 16 05:00:06.137321 systemd[1]: session-23.scope: Deactivated successfully. Sep 16 05:00:06.138363 systemd-logind[1543]: Session 23 logged out. Waiting for processes to exit. Sep 16 05:00:06.140101 systemd-logind[1543]: Removed session 23. Sep 16 05:00:08.197379 kubelet[2715]: E0916 05:00:08.197311 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 05:00:10.198304 kubelet[2715]: E0916 05:00:10.198229 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 05:00:11.141856 systemd[1]: Started sshd@23-10.0.0.114:22-10.0.0.1:57242.service - OpenSSH per-connection server daemon (10.0.0.1:57242). 
Sep 16 05:00:11.200255 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 57242 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 05:00:11.202118 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:11.207703 systemd-logind[1543]: New session 24 of user core. Sep 16 05:00:11.222357 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 16 05:00:11.338779 sshd[4333]: Connection closed by 10.0.0.1 port 57242 Sep 16 05:00:11.340469 sshd-session[4330]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:11.351454 systemd[1]: sshd@23-10.0.0.114:22-10.0.0.1:57242.service: Deactivated successfully. Sep 16 05:00:11.353803 systemd[1]: session-24.scope: Deactivated successfully. Sep 16 05:00:11.355085 systemd-logind[1543]: Session 24 logged out. Waiting for processes to exit. Sep 16 05:00:11.358343 systemd[1]: Started sshd@24-10.0.0.114:22-10.0.0.1:57246.service - OpenSSH per-connection server daemon (10.0.0.1:57246). Sep 16 05:00:11.359435 systemd-logind[1543]: Removed session 24. Sep 16 05:00:11.421373 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 57246 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 05:00:11.423619 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:11.429508 systemd-logind[1543]: New session 25 of user core. Sep 16 05:00:11.440475 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 16 05:00:12.848953 containerd[1573]: time="2025-09-16T05:00:12.848853995Z" level=info msg="StopContainer for \"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\" with timeout 30 (s)" Sep 16 05:00:12.860601 containerd[1573]: time="2025-09-16T05:00:12.860544415Z" level=info msg="Stop container \"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\" with signal terminated" Sep 16 05:00:12.876657 systemd[1]: cri-containerd-36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c.scope: Deactivated successfully. Sep 16 05:00:12.878673 containerd[1573]: time="2025-09-16T05:00:12.878619996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\" id:\"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\" pid:3139 exited_at:{seconds:1757998812 nanos:877888777}" Sep 16 05:00:12.878836 containerd[1573]: time="2025-09-16T05:00:12.878809486Z" level=info msg="received exit event container_id:\"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\" id:\"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\" pid:3139 exited_at:{seconds:1757998812 nanos:877888777}" Sep 16 05:00:12.898157 containerd[1573]: time="2025-09-16T05:00:12.898091330Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 05:00:12.898539 containerd[1573]: time="2025-09-16T05:00:12.898511517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\" id:\"6edc33be9267557db70b4bcc05d731936f3298c480f3226b38a30660e30af904\" pid:4376 exited_at:{seconds:1757998812 nanos:898114784}" Sep 16 05:00:12.901717 containerd[1573]: time="2025-09-16T05:00:12.901673565Z" level=info 
msg="StopContainer for \"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\" with timeout 2 (s)" Sep 16 05:00:12.902054 containerd[1573]: time="2025-09-16T05:00:12.902016736Z" level=info msg="Stop container \"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\" with signal terminated" Sep 16 05:00:12.908082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c-rootfs.mount: Deactivated successfully. Sep 16 05:00:12.913491 systemd-networkd[1473]: lxc_health: Link DOWN Sep 16 05:00:12.913502 systemd-networkd[1473]: lxc_health: Lost carrier Sep 16 05:00:12.932935 containerd[1573]: time="2025-09-16T05:00:12.932879892Z" level=info msg="StopContainer for \"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\" returns successfully" Sep 16 05:00:12.934810 systemd[1]: cri-containerd-7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8.scope: Deactivated successfully. Sep 16 05:00:12.935827 systemd[1]: cri-containerd-7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8.scope: Consumed 7.255s CPU time, 126.5M memory peak, 200K read from disk, 13.3M written to disk. 
Sep 16 05:00:12.936664 containerd[1573]: time="2025-09-16T05:00:12.936627102Z" level=info msg="StopPodSandbox for \"b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3\"" Sep 16 05:00:12.936752 containerd[1573]: time="2025-09-16T05:00:12.936728585Z" level=info msg="Container to stop \"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:00:12.937231 containerd[1573]: time="2025-09-16T05:00:12.937199479Z" level=info msg="received exit event container_id:\"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\" id:\"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\" pid:3373 exited_at:{seconds:1757998812 nanos:936762279}" Sep 16 05:00:12.937443 containerd[1573]: time="2025-09-16T05:00:12.937410881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\" id:\"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\" pid:3373 exited_at:{seconds:1757998812 nanos:936762279}" Sep 16 05:00:12.948729 systemd[1]: cri-containerd-b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3.scope: Deactivated successfully. Sep 16 05:00:12.951307 containerd[1573]: time="2025-09-16T05:00:12.951092584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3\" id:\"b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3\" pid:2847 exit_status:137 exited_at:{seconds:1757998812 nanos:950497283}" Sep 16 05:00:12.964104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8-rootfs.mount: Deactivated successfully. Sep 16 05:00:12.985060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3-rootfs.mount: Deactivated successfully. 
Sep 16 05:00:13.012887 containerd[1573]: time="2025-09-16T05:00:13.012839850Z" level=info msg="StopContainer for \"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\" returns successfully" Sep 16 05:00:13.013716 containerd[1573]: time="2025-09-16T05:00:13.013608128Z" level=info msg="shim disconnected" id=b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3 namespace=k8s.io Sep 16 05:00:13.013716 containerd[1573]: time="2025-09-16T05:00:13.013645649Z" level=warning msg="cleaning up after shim disconnected" id=b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3 namespace=k8s.io Sep 16 05:00:13.016617 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3-shm.mount: Deactivated successfully. Sep 16 05:00:13.043357 containerd[1573]: time="2025-09-16T05:00:13.013657723Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 05:00:13.043576 containerd[1573]: time="2025-09-16T05:00:13.015060868Z" level=info msg="StopPodSandbox for \"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\"" Sep 16 05:00:13.043576 containerd[1573]: time="2025-09-16T05:00:13.043487278Z" level=info msg="Container to stop \"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:00:13.043576 containerd[1573]: time="2025-09-16T05:00:13.043502617Z" level=info msg="Container to stop \"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:00:13.043576 containerd[1573]: time="2025-09-16T05:00:13.043511383Z" level=info msg="Container to stop \"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:00:13.043576 containerd[1573]: time="2025-09-16T05:00:13.043520230Z" level=info msg="Container 
to stop \"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:00:13.043576 containerd[1573]: time="2025-09-16T05:00:13.043529979Z" level=info msg="Container to stop \"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:00:13.043810 containerd[1573]: time="2025-09-16T05:00:13.034819876Z" level=info msg="TearDown network for sandbox \"b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3\" successfully" Sep 16 05:00:13.043810 containerd[1573]: time="2025-09-16T05:00:13.043802095Z" level=info msg="StopPodSandbox for \"b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3\" returns successfully" Sep 16 05:00:13.047921 containerd[1573]: time="2025-09-16T05:00:13.047686924Z" level=info msg="received exit event sandbox_id:\"b6a702e3a79b81bc3e04c3855d90553a39c134fe64bbd1d2bca0c487422127e3\" exit_status:137 exited_at:{seconds:1757998812 nanos:950497283}" Sep 16 05:00:13.054330 systemd[1]: cri-containerd-a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35.scope: Deactivated successfully. Sep 16 05:00:13.061300 containerd[1573]: time="2025-09-16T05:00:13.061231809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" id:\"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" pid:2922 exit_status:137 exited_at:{seconds:1757998813 nanos:59626551}" Sep 16 05:00:13.093368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35-rootfs.mount: Deactivated successfully. 
Sep 16 05:00:13.098356 containerd[1573]: time="2025-09-16T05:00:13.098318408Z" level=info msg="shim disconnected" id=a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35 namespace=k8s.io Sep 16 05:00:13.098356 containerd[1573]: time="2025-09-16T05:00:13.098353425Z" level=warning msg="cleaning up after shim disconnected" id=a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35 namespace=k8s.io Sep 16 05:00:13.098482 containerd[1573]: time="2025-09-16T05:00:13.098362111Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 05:00:13.115509 containerd[1573]: time="2025-09-16T05:00:13.114528496Z" level=info msg="received exit event sandbox_id:\"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" exit_status:137 exited_at:{seconds:1757998813 nanos:59626551}" Sep 16 05:00:13.115509 containerd[1573]: time="2025-09-16T05:00:13.114684241Z" level=info msg="TearDown network for sandbox \"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" successfully" Sep 16 05:00:13.115509 containerd[1573]: time="2025-09-16T05:00:13.114705071Z" level=info msg="StopPodSandbox for \"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" returns successfully" Sep 16 05:00:13.115509 containerd[1573]: time="2025-09-16T05:00:13.114539517Z" level=error msg="Failed to handle event container_id:\"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" id:\"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" pid:2922 exit_status:137 exited_at:{seconds:1757998813 nanos:59626551} for a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" Sep 16 05:00:13.248533 kubelet[2715]: I0916 05:00:13.248429 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/750dfedb-3c9b-4e41-9960-d84c76990ab1-cilium-config-path\") pod \"750dfedb-3c9b-4e41-9960-d84c76990ab1\" (UID: \"750dfedb-3c9b-4e41-9960-d84c76990ab1\") " Sep 16 05:00:13.248533 kubelet[2715]: I0916 05:00:13.248507 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cilium-run\") pod \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " Sep 16 05:00:13.248533 kubelet[2715]: I0916 05:00:13.248526 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-hostproc\") pod \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " Sep 16 05:00:13.248533 kubelet[2715]: I0916 05:00:13.248543 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cilium-cgroup\") pod \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " Sep 16 05:00:13.248533 kubelet[2715]: I0916 05:00:13.248564 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-lib-modules\") pod \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " Sep 16 05:00:13.249398 kubelet[2715]: I0916 05:00:13.248580 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-host-proc-sys-net\") pod \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " Sep 16 05:00:13.249398 kubelet[2715]: I0916 05:00:13.248605 2715 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c562d\" (UniqueName: \"kubernetes.io/projected/750dfedb-3c9b-4e41-9960-d84c76990ab1-kube-api-access-c562d\") pod \"750dfedb-3c9b-4e41-9960-d84c76990ab1\" (UID: \"750dfedb-3c9b-4e41-9960-d84c76990ab1\") " Sep 16 05:00:13.249398 kubelet[2715]: I0916 05:00:13.248625 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-hubble-tls\") pod \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " Sep 16 05:00:13.249398 kubelet[2715]: I0916 05:00:13.248638 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cni-path\") pod \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " Sep 16 05:00:13.249398 kubelet[2715]: I0916 05:00:13.248655 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k52hs\" (UniqueName: \"kubernetes.io/projected/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-kube-api-access-k52hs\") pod \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " Sep 16 05:00:13.249398 kubelet[2715]: I0916 05:00:13.248667 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-bpf-maps\") pod \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " Sep 16 05:00:13.249615 kubelet[2715]: I0916 05:00:13.248691 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-host-proc-sys-kernel\") pod \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\" 
(UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " Sep 16 05:00:13.249615 kubelet[2715]: I0916 05:00:13.248673 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "80f0e3ab-edd0-4e25-98da-f8ebd78284e6" (UID: "80f0e3ab-edd0-4e25-98da-f8ebd78284e6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 05:00:13.249615 kubelet[2715]: I0916 05:00:13.248712 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-xtables-lock\") pod \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " Sep 16 05:00:13.249615 kubelet[2715]: I0916 05:00:13.248823 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-etc-cni-netd\") pod \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " Sep 16 05:00:13.249615 kubelet[2715]: I0916 05:00:13.248858 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-clustermesh-secrets\") pod \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " Sep 16 05:00:13.249615 kubelet[2715]: I0916 05:00:13.248879 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cilium-config-path\") pod \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\" (UID: \"80f0e3ab-edd0-4e25-98da-f8ebd78284e6\") " Sep 16 05:00:13.249832 kubelet[2715]: I0916 05:00:13.248938 2715 reconciler_common.go:293] "Volume detached for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.250225 kubelet[2715]: I0916 05:00:13.248765 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "80f0e3ab-edd0-4e25-98da-f8ebd78284e6" (UID: "80f0e3ab-edd0-4e25-98da-f8ebd78284e6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 05:00:13.250732 kubelet[2715]: I0916 05:00:13.248785 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-hostproc" (OuterVolumeSpecName: "hostproc") pod "80f0e3ab-edd0-4e25-98da-f8ebd78284e6" (UID: "80f0e3ab-edd0-4e25-98da-f8ebd78284e6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 05:00:13.250732 kubelet[2715]: I0916 05:00:13.248801 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "80f0e3ab-edd0-4e25-98da-f8ebd78284e6" (UID: "80f0e3ab-edd0-4e25-98da-f8ebd78284e6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 05:00:13.250732 kubelet[2715]: I0916 05:00:13.248821 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "80f0e3ab-edd0-4e25-98da-f8ebd78284e6" (UID: "80f0e3ab-edd0-4e25-98da-f8ebd78284e6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 05:00:13.250732 kubelet[2715]: I0916 05:00:13.248835 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "80f0e3ab-edd0-4e25-98da-f8ebd78284e6" (UID: "80f0e3ab-edd0-4e25-98da-f8ebd78284e6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 05:00:13.250732 kubelet[2715]: I0916 05:00:13.250000 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "80f0e3ab-edd0-4e25-98da-f8ebd78284e6" (UID: "80f0e3ab-edd0-4e25-98da-f8ebd78284e6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 05:00:13.253217 kubelet[2715]: I0916 05:00:13.253006 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "80f0e3ab-edd0-4e25-98da-f8ebd78284e6" (UID: "80f0e3ab-edd0-4e25-98da-f8ebd78284e6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 16 05:00:13.253217 kubelet[2715]: I0916 05:00:13.253123 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750dfedb-3c9b-4e41-9960-d84c76990ab1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "750dfedb-3c9b-4e41-9960-d84c76990ab1" (UID: "750dfedb-3c9b-4e41-9960-d84c76990ab1"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 16 05:00:13.253441 kubelet[2715]: I0916 05:00:13.253256 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "80f0e3ab-edd0-4e25-98da-f8ebd78284e6" (UID: "80f0e3ab-edd0-4e25-98da-f8ebd78284e6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 05:00:13.253441 kubelet[2715]: I0916 05:00:13.253298 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cni-path" (OuterVolumeSpecName: "cni-path") pod "80f0e3ab-edd0-4e25-98da-f8ebd78284e6" (UID: "80f0e3ab-edd0-4e25-98da-f8ebd78284e6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 05:00:13.253441 kubelet[2715]: I0916 05:00:13.253288 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "80f0e3ab-edd0-4e25-98da-f8ebd78284e6" (UID: "80f0e3ab-edd0-4e25-98da-f8ebd78284e6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 05:00:13.254511 kubelet[2715]: I0916 05:00:13.254472 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/750dfedb-3c9b-4e41-9960-d84c76990ab1-kube-api-access-c562d" (OuterVolumeSpecName: "kube-api-access-c562d") pod "750dfedb-3c9b-4e41-9960-d84c76990ab1" (UID: "750dfedb-3c9b-4e41-9960-d84c76990ab1"). InnerVolumeSpecName "kube-api-access-c562d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 16 05:00:13.255233 kubelet[2715]: I0916 05:00:13.255164 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "80f0e3ab-edd0-4e25-98da-f8ebd78284e6" (UID: "80f0e3ab-edd0-4e25-98da-f8ebd78284e6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 16 05:00:13.257025 kubelet[2715]: I0916 05:00:13.256973 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "80f0e3ab-edd0-4e25-98da-f8ebd78284e6" (UID: "80f0e3ab-edd0-4e25-98da-f8ebd78284e6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 16 05:00:13.257723 kubelet[2715]: I0916 05:00:13.257497 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-kube-api-access-k52hs" (OuterVolumeSpecName: "kube-api-access-k52hs") pod "80f0e3ab-edd0-4e25-98da-f8ebd78284e6" (UID: "80f0e3ab-edd0-4e25-98da-f8ebd78284e6"). InnerVolumeSpecName "kube-api-access-k52hs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 16 05:00:13.349381 kubelet[2715]: I0916 05:00:13.349277 2715 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.349381 kubelet[2715]: I0916 05:00:13.349349 2715 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.349381 kubelet[2715]: I0916 05:00:13.349365 2715 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.349381 kubelet[2715]: I0916 05:00:13.349379 2715 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/750dfedb-3c9b-4e41-9960-d84c76990ab1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.349381 kubelet[2715]: I0916 05:00:13.349394 2715 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.349381 kubelet[2715]: I0916 05:00:13.349408 2715 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.349381 kubelet[2715]: I0916 05:00:13.349419 2715 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.349820 kubelet[2715]: I0916 
05:00:13.349432 2715 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.349820 kubelet[2715]: I0916 05:00:13.349445 2715 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.349820 kubelet[2715]: I0916 05:00:13.349457 2715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c562d\" (UniqueName: \"kubernetes.io/projected/750dfedb-3c9b-4e41-9960-d84c76990ab1-kube-api-access-c562d\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.349820 kubelet[2715]: I0916 05:00:13.349524 2715 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.349820 kubelet[2715]: I0916 05:00:13.349539 2715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k52hs\" (UniqueName: \"kubernetes.io/projected/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-kube-api-access-k52hs\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.349820 kubelet[2715]: I0916 05:00:13.349551 2715 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.349820 kubelet[2715]: I0916 05:00:13.349562 2715 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.349820 kubelet[2715]: I0916 05:00:13.349574 2715 reconciler_common.go:293] "Volume detached for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80f0e3ab-edd0-4e25-98da-f8ebd78284e6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 16 05:00:13.478790 kubelet[2715]: I0916 05:00:13.478435 2715 scope.go:117] "RemoveContainer" containerID="7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8" Sep 16 05:00:13.485750 containerd[1573]: time="2025-09-16T05:00:13.485694958Z" level=info msg="RemoveContainer for \"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\"" Sep 16 05:00:13.491737 systemd[1]: Removed slice kubepods-burstable-pod80f0e3ab_edd0_4e25_98da_f8ebd78284e6.slice - libcontainer container kubepods-burstable-pod80f0e3ab_edd0_4e25_98da_f8ebd78284e6.slice. Sep 16 05:00:13.491886 systemd[1]: kubepods-burstable-pod80f0e3ab_edd0_4e25_98da_f8ebd78284e6.slice: Consumed 7.398s CPU time, 126.9M memory peak, 208K read from disk, 13.3M written to disk. Sep 16 05:00:13.497646 systemd[1]: Removed slice kubepods-besteffort-pod750dfedb_3c9b_4e41_9960_d84c76990ab1.slice - libcontainer container kubepods-besteffort-pod750dfedb_3c9b_4e41_9960_d84c76990ab1.slice. 
Sep 16 05:00:13.505330 containerd[1573]: time="2025-09-16T05:00:13.505259919Z" level=info msg="RemoveContainer for \"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\" returns successfully" Sep 16 05:00:13.505717 kubelet[2715]: I0916 05:00:13.505678 2715 scope.go:117] "RemoveContainer" containerID="6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631" Sep 16 05:00:13.507544 containerd[1573]: time="2025-09-16T05:00:13.507502377Z" level=info msg="RemoveContainer for \"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631\"" Sep 16 05:00:13.514598 containerd[1573]: time="2025-09-16T05:00:13.514548671Z" level=info msg="RemoveContainer for \"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631\" returns successfully" Sep 16 05:00:13.515443 kubelet[2715]: I0916 05:00:13.515405 2715 scope.go:117] "RemoveContainer" containerID="07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579" Sep 16 05:00:13.519221 containerd[1573]: time="2025-09-16T05:00:13.519132897Z" level=info msg="RemoveContainer for \"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579\"" Sep 16 05:00:13.528385 containerd[1573]: time="2025-09-16T05:00:13.528330706Z" level=info msg="RemoveContainer for \"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579\" returns successfully" Sep 16 05:00:13.528692 kubelet[2715]: I0916 05:00:13.528646 2715 scope.go:117] "RemoveContainer" containerID="878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df" Sep 16 05:00:13.530524 containerd[1573]: time="2025-09-16T05:00:13.530481952Z" level=info msg="RemoveContainer for \"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df\"" Sep 16 05:00:13.550222 containerd[1573]: time="2025-09-16T05:00:13.550144708Z" level=info msg="RemoveContainer for \"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df\" returns successfully" Sep 16 05:00:13.550499 kubelet[2715]: I0916 05:00:13.550463 2715 scope.go:117] 
"RemoveContainer" containerID="388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690" Sep 16 05:00:13.552351 containerd[1573]: time="2025-09-16T05:00:13.552301143Z" level=info msg="RemoveContainer for \"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690\"" Sep 16 05:00:13.557119 containerd[1573]: time="2025-09-16T05:00:13.557070130Z" level=info msg="RemoveContainer for \"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690\" returns successfully" Sep 16 05:00:13.557439 kubelet[2715]: I0916 05:00:13.557338 2715 scope.go:117] "RemoveContainer" containerID="7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8" Sep 16 05:00:13.568234 containerd[1573]: time="2025-09-16T05:00:13.557548670Z" level=error msg="ContainerStatus for \"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\": not found" Sep 16 05:00:13.569350 kubelet[2715]: E0916 05:00:13.569298 2715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\": not found" containerID="7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8" Sep 16 05:00:13.569477 kubelet[2715]: I0916 05:00:13.569357 2715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8"} err="failed to get container status \"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e56a81e9d7edc9b4221910bbb5b62cde2b774049d2b3196bb2f258a5fa670b8\": not found" Sep 16 05:00:13.569477 kubelet[2715]: I0916 05:00:13.569470 2715 scope.go:117] "RemoveContainer" 
containerID="6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631" Sep 16 05:00:13.569913 containerd[1573]: time="2025-09-16T05:00:13.569830617Z" level=error msg="ContainerStatus for \"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631\": not found" Sep 16 05:00:13.570089 kubelet[2715]: E0916 05:00:13.570002 2715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631\": not found" containerID="6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631" Sep 16 05:00:13.570089 kubelet[2715]: I0916 05:00:13.570031 2715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631"} err="failed to get container status \"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d8f0734cde937ee05be241f357fbf93f6947a3bbabd38623b9c950793db0631\": not found" Sep 16 05:00:13.570089 kubelet[2715]: I0916 05:00:13.570048 2715 scope.go:117] "RemoveContainer" containerID="07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579" Sep 16 05:00:13.570404 containerd[1573]: time="2025-09-16T05:00:13.570356516Z" level=error msg="ContainerStatus for \"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579\": not found" Sep 16 05:00:13.570549 kubelet[2715]: E0916 05:00:13.570508 2715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579\": not found" containerID="07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579" Sep 16 05:00:13.570596 kubelet[2715]: I0916 05:00:13.570541 2715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579"} err="failed to get container status \"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579\": rpc error: code = NotFound desc = an error occurred when try to find container \"07ff258f49eee1ca8e9397deacd42530e1a93f67b4c2dbf4ee960b54b9536579\": not found" Sep 16 05:00:13.570596 kubelet[2715]: I0916 05:00:13.570562 2715 scope.go:117] "RemoveContainer" containerID="878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df" Sep 16 05:00:13.570766 containerd[1573]: time="2025-09-16T05:00:13.570726869Z" level=error msg="ContainerStatus for \"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df\": not found" Sep 16 05:00:13.570863 kubelet[2715]: E0916 05:00:13.570837 2715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df\": not found" containerID="878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df" Sep 16 05:00:13.570909 kubelet[2715]: I0916 05:00:13.570863 2715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df"} err="failed to get container status \"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"878d9e26edf780b9da4c8705ec7b2c0a611a2b8ed620bbc1bbb6438f549c13df\": not found" Sep 16 05:00:13.570909 kubelet[2715]: I0916 05:00:13.570880 2715 scope.go:117] "RemoveContainer" containerID="388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690" Sep 16 05:00:13.571101 containerd[1573]: time="2025-09-16T05:00:13.571063839Z" level=error msg="ContainerStatus for \"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690\": not found" Sep 16 05:00:13.571317 kubelet[2715]: E0916 05:00:13.571272 2715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690\": not found" containerID="388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690" Sep 16 05:00:13.571379 kubelet[2715]: I0916 05:00:13.571314 2715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690"} err="failed to get container status \"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690\": rpc error: code = NotFound desc = an error occurred when try to find container \"388fa349f8f7562340a38787a1ddb8f8669b5577944f2c18059acc49034d1690\": not found" Sep 16 05:00:13.571379 kubelet[2715]: I0916 05:00:13.571345 2715 scope.go:117] "RemoveContainer" containerID="36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c" Sep 16 05:00:13.572970 containerd[1573]: time="2025-09-16T05:00:13.572937388Z" level=info msg="RemoveContainer for \"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\"" Sep 16 05:00:13.576860 containerd[1573]: time="2025-09-16T05:00:13.576813429Z" level=info msg="RemoveContainer for 
\"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\" returns successfully" Sep 16 05:00:13.577021 kubelet[2715]: I0916 05:00:13.576980 2715 scope.go:117] "RemoveContainer" containerID="36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c" Sep 16 05:00:13.577169 containerd[1573]: time="2025-09-16T05:00:13.577140470Z" level=error msg="ContainerStatus for \"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\": not found" Sep 16 05:00:13.577365 kubelet[2715]: E0916 05:00:13.577309 2715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\": not found" containerID="36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c" Sep 16 05:00:13.577424 kubelet[2715]: I0916 05:00:13.577362 2715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c"} err="failed to get container status \"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"36807521d9a9342896a8eb9c718c9cf2bfc4989e40575904a1ac49e84e3deb2c\": not found" Sep 16 05:00:13.906933 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35-shm.mount: Deactivated successfully. Sep 16 05:00:13.907062 systemd[1]: var-lib-kubelet-pods-80f0e3ab\x2dedd0\x2d4e25\x2d98da\x2df8ebd78284e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk52hs.mount: Deactivated successfully. 
Sep 16 05:00:13.907145 systemd[1]: var-lib-kubelet-pods-750dfedb\x2d3c9b\x2d4e41\x2d9960\x2dd84c76990ab1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc562d.mount: Deactivated successfully. Sep 16 05:00:13.907272 systemd[1]: var-lib-kubelet-pods-80f0e3ab\x2dedd0\x2d4e25\x2d98da\x2df8ebd78284e6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 16 05:00:13.907358 systemd[1]: var-lib-kubelet-pods-80f0e3ab\x2dedd0\x2d4e25\x2d98da\x2df8ebd78284e6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 16 05:00:14.553739 containerd[1573]: time="2025-09-16T05:00:14.553618519Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" id:\"a297936138a0b8523d38fb1b3274d103347533b2def9ece7a95db2831f57fd35\" pid:2922 exit_status:137 exited_at:{seconds:1757998813 nanos:59626551}" Sep 16 05:00:14.803382 sshd[4349]: Connection closed by 10.0.0.1 port 57246 Sep 16 05:00:14.804023 sshd-session[4346]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:14.813689 systemd[1]: sshd@24-10.0.0.114:22-10.0.0.1:57246.service: Deactivated successfully. Sep 16 05:00:14.815995 systemd[1]: session-25.scope: Deactivated successfully. Sep 16 05:00:14.817145 systemd-logind[1543]: Session 25 logged out. Waiting for processes to exit. Sep 16 05:00:14.820885 systemd[1]: Started sshd@25-10.0.0.114:22-10.0.0.1:57254.service - OpenSSH per-connection server daemon (10.0.0.1:57254). Sep 16 05:00:14.821594 systemd-logind[1543]: Removed session 25. Sep 16 05:00:14.886475 sshd[4504]: Accepted publickey for core from 10.0.0.1 port 57254 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 05:00:14.888207 sshd-session[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:14.893908 systemd-logind[1543]: New session 26 of user core. 
Sep 16 05:00:14.904620 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 16 05:00:15.200078 kubelet[2715]: I0916 05:00:15.199904 2715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="750dfedb-3c9b-4e41-9960-d84c76990ab1" path="/var/lib/kubelet/pods/750dfedb-3c9b-4e41-9960-d84c76990ab1/volumes" Sep 16 05:00:15.200764 kubelet[2715]: I0916 05:00:15.200732 2715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80f0e3ab-edd0-4e25-98da-f8ebd78284e6" path="/var/lib/kubelet/pods/80f0e3ab-edd0-4e25-98da-f8ebd78284e6/volumes" Sep 16 05:00:15.277050 kubelet[2715]: E0916 05:00:15.276997 2715 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 16 05:00:15.346874 sshd[4507]: Connection closed by 10.0.0.1 port 57254 Sep 16 05:00:15.348316 sshd-session[4504]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:15.360380 systemd[1]: sshd@25-10.0.0.114:22-10.0.0.1:57254.service: Deactivated successfully. Sep 16 05:00:15.363487 systemd[1]: session-26.scope: Deactivated successfully. 
Sep 16 05:00:15.365898 kubelet[2715]: E0916 05:00:15.365139 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="750dfedb-3c9b-4e41-9960-d84c76990ab1" containerName="cilium-operator"
Sep 16 05:00:15.365898 kubelet[2715]: E0916 05:00:15.365992 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80f0e3ab-edd0-4e25-98da-f8ebd78284e6" containerName="mount-cgroup"
Sep 16 05:00:15.365898 kubelet[2715]: E0916 05:00:15.366914 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80f0e3ab-edd0-4e25-98da-f8ebd78284e6" containerName="apply-sysctl-overwrites"
Sep 16 05:00:15.365898 kubelet[2715]: E0916 05:00:15.366930 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80f0e3ab-edd0-4e25-98da-f8ebd78284e6" containerName="mount-bpf-fs"
Sep 16 05:00:15.365898 kubelet[2715]: E0916 05:00:15.366937 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80f0e3ab-edd0-4e25-98da-f8ebd78284e6" containerName="clean-cilium-state"
Sep 16 05:00:15.365898 kubelet[2715]: E0916 05:00:15.366946 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80f0e3ab-edd0-4e25-98da-f8ebd78284e6" containerName="cilium-agent"
Sep 16 05:00:15.365461 systemd-logind[1543]: Session 26 logged out. Waiting for processes to exit.
Sep 16 05:00:15.370489 systemd[1]: Started sshd@26-10.0.0.114:22-10.0.0.1:57258.service - OpenSSH per-connection server daemon (10.0.0.1:57258).
Sep 16 05:00:15.373278 systemd-logind[1543]: Removed session 26.
Sep 16 05:00:15.376252 kubelet[2715]: I0916 05:00:15.375041 2715 memory_manager.go:354] "RemoveStaleState removing state" podUID="750dfedb-3c9b-4e41-9960-d84c76990ab1" containerName="cilium-operator"
Sep 16 05:00:15.377178 kubelet[2715]: I0916 05:00:15.376496 2715 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f0e3ab-edd0-4e25-98da-f8ebd78284e6" containerName="cilium-agent"
Sep 16 05:00:15.389065 systemd[1]: Created slice kubepods-burstable-pod0e79efe1_0842_4b41_b238_4a4107a2a23b.slice - libcontainer container kubepods-burstable-pod0e79efe1_0842_4b41_b238_4a4107a2a23b.slice.
Sep 16 05:00:15.438208 sshd[4519]: Accepted publickey for core from 10.0.0.1 port 57258 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE
Sep 16 05:00:15.440004 sshd-session[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:00:15.445094 systemd-logind[1543]: New session 27 of user core.
Sep 16 05:00:15.452409 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 16 05:00:15.506497 sshd[4522]: Connection closed by 10.0.0.1 port 57258
Sep 16 05:00:15.506878 sshd-session[4519]: pam_unix(sshd:session): session closed for user core
Sep 16 05:00:15.524156 systemd[1]: sshd@26-10.0.0.114:22-10.0.0.1:57258.service: Deactivated successfully.
Sep 16 05:00:15.526721 systemd[1]: session-27.scope: Deactivated successfully.
Sep 16 05:00:15.527650 systemd-logind[1543]: Session 27 logged out. Waiting for processes to exit.
Sep 16 05:00:15.531465 systemd[1]: Started sshd@27-10.0.0.114:22-10.0.0.1:57274.service - OpenSSH per-connection server daemon (10.0.0.1:57274).
Sep 16 05:00:15.532127 systemd-logind[1543]: Removed session 27.
Sep 16 05:00:15.563454 kubelet[2715]: I0916 05:00:15.563393 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0e79efe1-0842-4b41-b238-4a4107a2a23b-host-proc-sys-kernel\") pod \"cilium-cnpvg\" (UID: \"0e79efe1-0842-4b41-b238-4a4107a2a23b\") " pod="kube-system/cilium-cnpvg"
Sep 16 05:00:15.563454 kubelet[2715]: I0916 05:00:15.563439 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0e79efe1-0842-4b41-b238-4a4107a2a23b-cilium-cgroup\") pod \"cilium-cnpvg\" (UID: \"0e79efe1-0842-4b41-b238-4a4107a2a23b\") " pod="kube-system/cilium-cnpvg"
Sep 16 05:00:15.563610 kubelet[2715]: I0916 05:00:15.563514 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e79efe1-0842-4b41-b238-4a4107a2a23b-etc-cni-netd\") pod \"cilium-cnpvg\" (UID: \"0e79efe1-0842-4b41-b238-4a4107a2a23b\") " pod="kube-system/cilium-cnpvg"
Sep 16 05:00:15.563610 kubelet[2715]: I0916 05:00:15.563533 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e79efe1-0842-4b41-b238-4a4107a2a23b-cilium-config-path\") pod \"cilium-cnpvg\" (UID: \"0e79efe1-0842-4b41-b238-4a4107a2a23b\") " pod="kube-system/cilium-cnpvg"
Sep 16 05:00:15.563610 kubelet[2715]: I0916 05:00:15.563547 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0e79efe1-0842-4b41-b238-4a4107a2a23b-hubble-tls\") pod \"cilium-cnpvg\" (UID: \"0e79efe1-0842-4b41-b238-4a4107a2a23b\") " pod="kube-system/cilium-cnpvg"
Sep 16 05:00:15.563610 kubelet[2715]: I0916 05:00:15.563564 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm2m4\" (UniqueName: \"kubernetes.io/projected/0e79efe1-0842-4b41-b238-4a4107a2a23b-kube-api-access-sm2m4\") pod \"cilium-cnpvg\" (UID: \"0e79efe1-0842-4b41-b238-4a4107a2a23b\") " pod="kube-system/cilium-cnpvg"
Sep 16 05:00:15.563610 kubelet[2715]: I0916 05:00:15.563580 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0e79efe1-0842-4b41-b238-4a4107a2a23b-bpf-maps\") pod \"cilium-cnpvg\" (UID: \"0e79efe1-0842-4b41-b238-4a4107a2a23b\") " pod="kube-system/cilium-cnpvg"
Sep 16 05:00:15.563610 kubelet[2715]: I0916 05:00:15.563594 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e79efe1-0842-4b41-b238-4a4107a2a23b-xtables-lock\") pod \"cilium-cnpvg\" (UID: \"0e79efe1-0842-4b41-b238-4a4107a2a23b\") " pod="kube-system/cilium-cnpvg"
Sep 16 05:00:15.563840 kubelet[2715]: I0916 05:00:15.563614 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0e79efe1-0842-4b41-b238-4a4107a2a23b-clustermesh-secrets\") pod \"cilium-cnpvg\" (UID: \"0e79efe1-0842-4b41-b238-4a4107a2a23b\") " pod="kube-system/cilium-cnpvg"
Sep 16 05:00:15.563840 kubelet[2715]: I0916 05:00:15.563631 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0e79efe1-0842-4b41-b238-4a4107a2a23b-host-proc-sys-net\") pod \"cilium-cnpvg\" (UID: \"0e79efe1-0842-4b41-b238-4a4107a2a23b\") " pod="kube-system/cilium-cnpvg"
Sep 16 05:00:15.563840 kubelet[2715]: I0916 05:00:15.563729 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0e79efe1-0842-4b41-b238-4a4107a2a23b-cni-path\") pod \"cilium-cnpvg\" (UID: \"0e79efe1-0842-4b41-b238-4a4107a2a23b\") " pod="kube-system/cilium-cnpvg"
Sep 16 05:00:15.563840 kubelet[2715]: I0916 05:00:15.563770 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e79efe1-0842-4b41-b238-4a4107a2a23b-lib-modules\") pod \"cilium-cnpvg\" (UID: \"0e79efe1-0842-4b41-b238-4a4107a2a23b\") " pod="kube-system/cilium-cnpvg"
Sep 16 05:00:15.563840 kubelet[2715]: I0916 05:00:15.563806 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0e79efe1-0842-4b41-b238-4a4107a2a23b-cilium-run\") pod \"cilium-cnpvg\" (UID: \"0e79efe1-0842-4b41-b238-4a4107a2a23b\") " pod="kube-system/cilium-cnpvg"
Sep 16 05:00:15.563840 kubelet[2715]: I0916 05:00:15.563841 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0e79efe1-0842-4b41-b238-4a4107a2a23b-hostproc\") pod \"cilium-cnpvg\" (UID: \"0e79efe1-0842-4b41-b238-4a4107a2a23b\") " pod="kube-system/cilium-cnpvg"
Sep 16 05:00:15.563978 kubelet[2715]: I0916 05:00:15.563863 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0e79efe1-0842-4b41-b238-4a4107a2a23b-cilium-ipsec-secrets\") pod \"cilium-cnpvg\" (UID: \"0e79efe1-0842-4b41-b238-4a4107a2a23b\") " pod="kube-system/cilium-cnpvg"
Sep 16 05:00:15.587517 sshd[4529]: Accepted publickey for core from 10.0.0.1 port 57274 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE
Sep 16 05:00:15.589588 sshd-session[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:00:15.595049 systemd-logind[1543]: New session 28 of user core.
Sep 16 05:00:15.605319 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 16 05:00:15.694116 kubelet[2715]: E0916 05:00:15.694049 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 05:00:15.698952 containerd[1573]: time="2025-09-16T05:00:15.698899280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cnpvg,Uid:0e79efe1-0842-4b41-b238-4a4107a2a23b,Namespace:kube-system,Attempt:0,}"
Sep 16 05:00:15.720047 containerd[1573]: time="2025-09-16T05:00:15.719870988Z" level=info msg="connecting to shim 432a7856466ab760e8d3393f3b8cf5749b0fc058cfa21a0bf438fa4ce35e7d04" address="unix:///run/containerd/s/1e091efc86c3957c10d46491999d5e082cb5174534f28e5c867c00b83d1d6cc6" namespace=k8s.io protocol=ttrpc version=3
Sep 16 05:00:15.745369 systemd[1]: Started cri-containerd-432a7856466ab760e8d3393f3b8cf5749b0fc058cfa21a0bf438fa4ce35e7d04.scope - libcontainer container 432a7856466ab760e8d3393f3b8cf5749b0fc058cfa21a0bf438fa4ce35e7d04.
Sep 16 05:00:15.773543 containerd[1573]: time="2025-09-16T05:00:15.773485529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cnpvg,Uid:0e79efe1-0842-4b41-b238-4a4107a2a23b,Namespace:kube-system,Attempt:0,} returns sandbox id \"432a7856466ab760e8d3393f3b8cf5749b0fc058cfa21a0bf438fa4ce35e7d04\""
Sep 16 05:00:15.774280 kubelet[2715]: E0916 05:00:15.774176 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 05:00:15.776367 containerd[1573]: time="2025-09-16T05:00:15.776321262Z" level=info msg="CreateContainer within sandbox \"432a7856466ab760e8d3393f3b8cf5749b0fc058cfa21a0bf438fa4ce35e7d04\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 16 05:00:15.784429 containerd[1573]: time="2025-09-16T05:00:15.784374398Z" level=info msg="Container d533a897853c1c32e9e6aff7a24474e09c55a801d93258829c2f707326c7b0cf: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:00:15.793052 containerd[1573]: time="2025-09-16T05:00:15.792979993Z" level=info msg="CreateContainer within sandbox \"432a7856466ab760e8d3393f3b8cf5749b0fc058cfa21a0bf438fa4ce35e7d04\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d533a897853c1c32e9e6aff7a24474e09c55a801d93258829c2f707326c7b0cf\""
Sep 16 05:00:15.793706 containerd[1573]: time="2025-09-16T05:00:15.793664051Z" level=info msg="StartContainer for \"d533a897853c1c32e9e6aff7a24474e09c55a801d93258829c2f707326c7b0cf\""
Sep 16 05:00:15.794684 containerd[1573]: time="2025-09-16T05:00:15.794657568Z" level=info msg="connecting to shim d533a897853c1c32e9e6aff7a24474e09c55a801d93258829c2f707326c7b0cf" address="unix:///run/containerd/s/1e091efc86c3957c10d46491999d5e082cb5174534f28e5c867c00b83d1d6cc6" protocol=ttrpc version=3
Sep 16 05:00:15.814608 systemd[1]: Started cri-containerd-d533a897853c1c32e9e6aff7a24474e09c55a801d93258829c2f707326c7b0cf.scope - libcontainer container d533a897853c1c32e9e6aff7a24474e09c55a801d93258829c2f707326c7b0cf.
Sep 16 05:00:15.848360 containerd[1573]: time="2025-09-16T05:00:15.848284351Z" level=info msg="StartContainer for \"d533a897853c1c32e9e6aff7a24474e09c55a801d93258829c2f707326c7b0cf\" returns successfully"
Sep 16 05:00:15.858473 systemd[1]: cri-containerd-d533a897853c1c32e9e6aff7a24474e09c55a801d93258829c2f707326c7b0cf.scope: Deactivated successfully.
Sep 16 05:00:15.860860 containerd[1573]: time="2025-09-16T05:00:15.860828402Z" level=info msg="received exit event container_id:\"d533a897853c1c32e9e6aff7a24474e09c55a801d93258829c2f707326c7b0cf\" id:\"d533a897853c1c32e9e6aff7a24474e09c55a801d93258829c2f707326c7b0cf\" pid:4605 exited_at:{seconds:1757998815 nanos:860594748}"
Sep 16 05:00:15.860985 containerd[1573]: time="2025-09-16T05:00:15.860967346Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d533a897853c1c32e9e6aff7a24474e09c55a801d93258829c2f707326c7b0cf\" id:\"d533a897853c1c32e9e6aff7a24474e09c55a801d93258829c2f707326c7b0cf\" pid:4605 exited_at:{seconds:1757998815 nanos:860594748}"
Sep 16 05:00:16.498857 kubelet[2715]: E0916 05:00:16.498817 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 05:00:16.500438 containerd[1573]: time="2025-09-16T05:00:16.500389600Z" level=info msg="CreateContainer within sandbox \"432a7856466ab760e8d3393f3b8cf5749b0fc058cfa21a0bf438fa4ce35e7d04\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 16 05:00:16.511968 containerd[1573]: time="2025-09-16T05:00:16.511922937Z" level=info msg="Container 5c3ad25098cf5a0fa63534754d9b4261c26a3511ffc19bb0a1c0fd1e38262634: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:00:16.519086 containerd[1573]: time="2025-09-16T05:00:16.519034461Z" level=info msg="CreateContainer within sandbox \"432a7856466ab760e8d3393f3b8cf5749b0fc058cfa21a0bf438fa4ce35e7d04\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c3ad25098cf5a0fa63534754d9b4261c26a3511ffc19bb0a1c0fd1e38262634\""
Sep 16 05:00:16.519497 containerd[1573]: time="2025-09-16T05:00:16.519476069Z" level=info msg="StartContainer for \"5c3ad25098cf5a0fa63534754d9b4261c26a3511ffc19bb0a1c0fd1e38262634\""
Sep 16 05:00:16.520255 containerd[1573]: time="2025-09-16T05:00:16.520230332Z" level=info msg="connecting to shim 5c3ad25098cf5a0fa63534754d9b4261c26a3511ffc19bb0a1c0fd1e38262634" address="unix:///run/containerd/s/1e091efc86c3957c10d46491999d5e082cb5174534f28e5c867c00b83d1d6cc6" protocol=ttrpc version=3
Sep 16 05:00:16.545544 systemd[1]: Started cri-containerd-5c3ad25098cf5a0fa63534754d9b4261c26a3511ffc19bb0a1c0fd1e38262634.scope - libcontainer container 5c3ad25098cf5a0fa63534754d9b4261c26a3511ffc19bb0a1c0fd1e38262634.
Sep 16 05:00:16.582992 containerd[1573]: time="2025-09-16T05:00:16.582943679Z" level=info msg="StartContainer for \"5c3ad25098cf5a0fa63534754d9b4261c26a3511ffc19bb0a1c0fd1e38262634\" returns successfully"
Sep 16 05:00:16.590028 systemd[1]: cri-containerd-5c3ad25098cf5a0fa63534754d9b4261c26a3511ffc19bb0a1c0fd1e38262634.scope: Deactivated successfully.
Sep 16 05:00:16.590592 containerd[1573]: time="2025-09-16T05:00:16.590549563Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c3ad25098cf5a0fa63534754d9b4261c26a3511ffc19bb0a1c0fd1e38262634\" id:\"5c3ad25098cf5a0fa63534754d9b4261c26a3511ffc19bb0a1c0fd1e38262634\" pid:4650 exited_at:{seconds:1757998816 nanos:590165034}"
Sep 16 05:00:16.590905 containerd[1573]: time="2025-09-16T05:00:16.590611110Z" level=info msg="received exit event container_id:\"5c3ad25098cf5a0fa63534754d9b4261c26a3511ffc19bb0a1c0fd1e38262634\" id:\"5c3ad25098cf5a0fa63534754d9b4261c26a3511ffc19bb0a1c0fd1e38262634\" pid:4650 exited_at:{seconds:1757998816 nanos:590165034}"
Sep 16 05:00:17.160177 kubelet[2715]: I0916 05:00:17.160100 2715 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-16T05:00:17Z","lastTransitionTime":"2025-09-16T05:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 16 05:00:17.503832 kubelet[2715]: E0916 05:00:17.503766 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 05:00:17.506047 containerd[1573]: time="2025-09-16T05:00:17.505987046Z" level=info msg="CreateContainer within sandbox \"432a7856466ab760e8d3393f3b8cf5749b0fc058cfa21a0bf438fa4ce35e7d04\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 16 05:00:17.518433 containerd[1573]: time="2025-09-16T05:00:17.518367614Z" level=info msg="Container 29f1f18771a0744453e23e1e4a32f6a84d22652832ebbe6700af87c0fa02809f: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:00:17.530228 containerd[1573]: time="2025-09-16T05:00:17.530154144Z" level=info msg="CreateContainer within sandbox \"432a7856466ab760e8d3393f3b8cf5749b0fc058cfa21a0bf438fa4ce35e7d04\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"29f1f18771a0744453e23e1e4a32f6a84d22652832ebbe6700af87c0fa02809f\""
Sep 16 05:00:17.532991 containerd[1573]: time="2025-09-16T05:00:17.532948856Z" level=info msg="StartContainer for \"29f1f18771a0744453e23e1e4a32f6a84d22652832ebbe6700af87c0fa02809f\""
Sep 16 05:00:17.537063 containerd[1573]: time="2025-09-16T05:00:17.536940459Z" level=info msg="connecting to shim 29f1f18771a0744453e23e1e4a32f6a84d22652832ebbe6700af87c0fa02809f" address="unix:///run/containerd/s/1e091efc86c3957c10d46491999d5e082cb5174534f28e5c867c00b83d1d6cc6" protocol=ttrpc version=3
Sep 16 05:00:17.590404 systemd[1]: Started cri-containerd-29f1f18771a0744453e23e1e4a32f6a84d22652832ebbe6700af87c0fa02809f.scope - libcontainer container 29f1f18771a0744453e23e1e4a32f6a84d22652832ebbe6700af87c0fa02809f.
Sep 16 05:00:17.646495 systemd[1]: cri-containerd-29f1f18771a0744453e23e1e4a32f6a84d22652832ebbe6700af87c0fa02809f.scope: Deactivated successfully.
Sep 16 05:00:17.647547 containerd[1573]: time="2025-09-16T05:00:17.647413934Z" level=info msg="TaskExit event in podsandbox handler container_id:\"29f1f18771a0744453e23e1e4a32f6a84d22652832ebbe6700af87c0fa02809f\" id:\"29f1f18771a0744453e23e1e4a32f6a84d22652832ebbe6700af87c0fa02809f\" pid:4694 exited_at:{seconds:1757998817 nanos:646943232}"
Sep 16 05:00:17.647547 containerd[1573]: time="2025-09-16T05:00:17.647434233Z" level=info msg="received exit event container_id:\"29f1f18771a0744453e23e1e4a32f6a84d22652832ebbe6700af87c0fa02809f\" id:\"29f1f18771a0744453e23e1e4a32f6a84d22652832ebbe6700af87c0fa02809f\" pid:4694 exited_at:{seconds:1757998817 nanos:646943232}"
Sep 16 05:00:17.648289 containerd[1573]: time="2025-09-16T05:00:17.648198443Z" level=info msg="StartContainer for \"29f1f18771a0744453e23e1e4a32f6a84d22652832ebbe6700af87c0fa02809f\" returns successfully"
Sep 16 05:00:17.673711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29f1f18771a0744453e23e1e4a32f6a84d22652832ebbe6700af87c0fa02809f-rootfs.mount: Deactivated successfully.
Sep 16 05:00:18.512721 kubelet[2715]: E0916 05:00:18.512672 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 05:00:18.514493 containerd[1573]: time="2025-09-16T05:00:18.514451548Z" level=info msg="CreateContainer within sandbox \"432a7856466ab760e8d3393f3b8cf5749b0fc058cfa21a0bf438fa4ce35e7d04\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 16 05:00:18.527668 containerd[1573]: time="2025-09-16T05:00:18.527415736Z" level=info msg="Container e84c00e83ec8aa7096ff944fad71d866a117e4ee2319b8317280b02c7c42eaf9: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:00:18.539494 containerd[1573]: time="2025-09-16T05:00:18.539440002Z" level=info msg="CreateContainer within sandbox \"432a7856466ab760e8d3393f3b8cf5749b0fc058cfa21a0bf438fa4ce35e7d04\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e84c00e83ec8aa7096ff944fad71d866a117e4ee2319b8317280b02c7c42eaf9\""
Sep 16 05:00:18.540048 containerd[1573]: time="2025-09-16T05:00:18.539940702Z" level=info msg="StartContainer for \"e84c00e83ec8aa7096ff944fad71d866a117e4ee2319b8317280b02c7c42eaf9\""
Sep 16 05:00:18.540743 containerd[1573]: time="2025-09-16T05:00:18.540720802Z" level=info msg="connecting to shim e84c00e83ec8aa7096ff944fad71d866a117e4ee2319b8317280b02c7c42eaf9" address="unix:///run/containerd/s/1e091efc86c3957c10d46491999d5e082cb5174534f28e5c867c00b83d1d6cc6" protocol=ttrpc version=3
Sep 16 05:00:18.572432 systemd[1]: Started cri-containerd-e84c00e83ec8aa7096ff944fad71d866a117e4ee2319b8317280b02c7c42eaf9.scope - libcontainer container e84c00e83ec8aa7096ff944fad71d866a117e4ee2319b8317280b02c7c42eaf9.
Sep 16 05:00:18.604637 systemd[1]: cri-containerd-e84c00e83ec8aa7096ff944fad71d866a117e4ee2319b8317280b02c7c42eaf9.scope: Deactivated successfully.
Sep 16 05:00:18.605122 containerd[1573]: time="2025-09-16T05:00:18.605070781Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e84c00e83ec8aa7096ff944fad71d866a117e4ee2319b8317280b02c7c42eaf9\" id:\"e84c00e83ec8aa7096ff944fad71d866a117e4ee2319b8317280b02c7c42eaf9\" pid:4733 exited_at:{seconds:1757998818 nanos:604695760}"
Sep 16 05:00:18.606757 containerd[1573]: time="2025-09-16T05:00:18.606697607Z" level=info msg="received exit event container_id:\"e84c00e83ec8aa7096ff944fad71d866a117e4ee2319b8317280b02c7c42eaf9\" id:\"e84c00e83ec8aa7096ff944fad71d866a117e4ee2319b8317280b02c7c42eaf9\" pid:4733 exited_at:{seconds:1757998818 nanos:604695760}"
Sep 16 05:00:18.635347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e84c00e83ec8aa7096ff944fad71d866a117e4ee2319b8317280b02c7c42eaf9-rootfs.mount: Deactivated successfully.
Sep 16 05:00:18.640585 containerd[1573]: time="2025-09-16T05:00:18.640505539Z" level=info msg="StartContainer for \"e84c00e83ec8aa7096ff944fad71d866a117e4ee2319b8317280b02c7c42eaf9\" returns successfully"
Sep 16 05:00:18.646006 containerd[1573]: time="2025-09-16T05:00:18.641317048Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/s/1e091efc86c3957c10d46491999d5e082cb5174534f28e5c867c00b83d1d6cc6->@: write: broken pipe" runtime=io.containerd.runc.v2
Sep 16 05:00:19.522146 kubelet[2715]: E0916 05:00:19.522095 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 05:00:19.525100 containerd[1573]: time="2025-09-16T05:00:19.525042654Z" level=info msg="CreateContainer within sandbox \"432a7856466ab760e8d3393f3b8cf5749b0fc058cfa21a0bf438fa4ce35e7d04\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 16 05:00:19.576643 containerd[1573]: time="2025-09-16T05:00:19.576275881Z" level=info msg="Container a901cadc0c40c1accd4e91a432818621582c8a430c171917f0c618c8419687c9: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:00:19.580909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682016188.mount: Deactivated successfully.
Sep 16 05:00:19.591687 containerd[1573]: time="2025-09-16T05:00:19.591622452Z" level=info msg="CreateContainer within sandbox \"432a7856466ab760e8d3393f3b8cf5749b0fc058cfa21a0bf438fa4ce35e7d04\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a901cadc0c40c1accd4e91a432818621582c8a430c171917f0c618c8419687c9\""
Sep 16 05:00:19.592420 containerd[1573]: time="2025-09-16T05:00:19.592377874Z" level=info msg="StartContainer for \"a901cadc0c40c1accd4e91a432818621582c8a430c171917f0c618c8419687c9\""
Sep 16 05:00:19.593460 containerd[1573]: time="2025-09-16T05:00:19.593431672Z" level=info msg="connecting to shim a901cadc0c40c1accd4e91a432818621582c8a430c171917f0c618c8419687c9" address="unix:///run/containerd/s/1e091efc86c3957c10d46491999d5e082cb5174534f28e5c867c00b83d1d6cc6" protocol=ttrpc version=3
Sep 16 05:00:19.621511 systemd[1]: Started cri-containerd-a901cadc0c40c1accd4e91a432818621582c8a430c171917f0c618c8419687c9.scope - libcontainer container a901cadc0c40c1accd4e91a432818621582c8a430c171917f0c618c8419687c9.
Sep 16 05:00:19.672554 containerd[1573]: time="2025-09-16T05:00:19.672490612Z" level=info msg="StartContainer for \"a901cadc0c40c1accd4e91a432818621582c8a430c171917f0c618c8419687c9\" returns successfully"
Sep 16 05:00:19.744574 containerd[1573]: time="2025-09-16T05:00:19.744527848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a901cadc0c40c1accd4e91a432818621582c8a430c171917f0c618c8419687c9\" id:\"0bf21759f5619fc97bf8c884923560455c6497ad016ed59d4775abf1cd00fe76\" pid:4800 exited_at:{seconds:1757998819 nanos:744177735}"
Sep 16 05:00:20.144234 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 16 05:00:20.527711 kubelet[2715]: E0916 05:00:20.527675 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 05:00:21.695939 kubelet[2715]: E0916 05:00:21.695873 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 05:00:22.048270 containerd[1573]: time="2025-09-16T05:00:22.048200791Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a901cadc0c40c1accd4e91a432818621582c8a430c171917f0c618c8419687c9\" id:\"7833290603e5eed1a667e74c4de79cf2b7398754e493e2f55480ebbb9533b3a2\" pid:4961 exit_status:1 exited_at:{seconds:1757998822 nanos:46287155}"
Sep 16 05:00:23.443370 systemd-networkd[1473]: lxc_health: Link UP
Sep 16 05:00:23.443684 systemd-networkd[1473]: lxc_health: Gained carrier
Sep 16 05:00:23.699245 kubelet[2715]: E0916 05:00:23.698348 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 05:00:23.717592 kubelet[2715]: I0916 05:00:23.715728 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cnpvg" podStartSLOduration=8.715710093 podStartE2EDuration="8.715710093s" podCreationTimestamp="2025-09-16 05:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:00:20.54243692 +0000 UTC m=+95.459489752" watchObservedRunningTime="2025-09-16 05:00:23.715710093 +0000 UTC m=+98.632762905"
Sep 16 05:00:24.184179 containerd[1573]: time="2025-09-16T05:00:24.184107249Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a901cadc0c40c1accd4e91a432818621582c8a430c171917f0c618c8419687c9\" id:\"ce845083676f83a7be7f1dacbddcb6a3b3b5b8c259a95cfc27d04ba77bd73abf\" pid:5332 exited_at:{seconds:1757998824 nanos:183567566}"
Sep 16 05:00:24.194490 kubelet[2715]: E0916 05:00:24.194438 2715 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47942->127.0.0.1:40947: write tcp 127.0.0.1:47942->127.0.0.1:40947: write: broken pipe
Sep 16 05:00:24.453623 systemd-networkd[1473]: lxc_health: Gained IPv6LL
Sep 16 05:00:24.537220 kubelet[2715]: E0916 05:00:24.536391 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 05:00:26.306016 containerd[1573]: time="2025-09-16T05:00:26.305955982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a901cadc0c40c1accd4e91a432818621582c8a430c171917f0c618c8419687c9\" id:\"ab30a611dcedd2be826f16dbb60650a968935db758bd722c3a467f21a6cbbb77\" pid:5367 exited_at:{seconds:1757998826 nanos:305451327}"
Sep 16 05:00:28.400153 containerd[1573]: time="2025-09-16T05:00:28.400090207Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a901cadc0c40c1accd4e91a432818621582c8a430c171917f0c618c8419687c9\" id:\"93a0edd33314d41a88737a0b673c8ea149ccf7473b27a39328009a8464d81f11\" pid:5399 exited_at:{seconds:1757998828 nanos:399738732}"
Sep 16 05:00:30.527330 containerd[1573]: time="2025-09-16T05:00:30.527271541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a901cadc0c40c1accd4e91a432818621582c8a430c171917f0c618c8419687c9\" id:\"1cda71476e304a2530bb260e2ab2e1b258b7d2bca80baa07d1b98ea617d3300d\" pid:5423 exited_at:{seconds:1757998830 nanos:526823615}"
Sep 16 05:00:30.534534 sshd[4533]: Connection closed by 10.0.0.1 port 57274
Sep 16 05:00:30.535138 sshd-session[4529]: pam_unix(sshd:session): session closed for user core
Sep 16 05:00:30.541114 systemd[1]: sshd@27-10.0.0.114:22-10.0.0.1:57274.service: Deactivated successfully.
Sep 16 05:00:30.543489 systemd[1]: session-28.scope: Deactivated successfully.
Sep 16 05:00:30.544303 systemd-logind[1543]: Session 28 logged out. Waiting for processes to exit.
Sep 16 05:00:30.545849 systemd-logind[1543]: Removed session 28.