Sep 16 04:50:45.897559 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 16 03:05:42 -00 2025
Sep 16 04:50:45.897588 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:50:45.897603 kernel: BIOS-provided physical RAM map:
Sep 16 04:50:45.897612 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 16 04:50:45.897621 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 16 04:50:45.897630 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 16 04:50:45.897640 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 16 04:50:45.897649 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 16 04:50:45.897662 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 16 04:50:45.897674 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 16 04:50:45.897684 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 16 04:50:45.897692 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 16 04:50:45.897726 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 16 04:50:45.897735 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 16 04:50:45.897746 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 16 04:50:45.897759 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 16 04:50:45.897773 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 16 04:50:45.897783 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 16 04:50:45.897792 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 16 04:50:45.897802 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 16 04:50:45.897811 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 16 04:50:45.897821 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 16 04:50:45.897830 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 16 04:50:45.897839 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 16 04:50:45.897849 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 16 04:50:45.897863 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 16 04:50:45.897874 kernel: NX (Execute Disable) protection: active
Sep 16 04:50:45.897884 kernel: APIC: Static calls initialized
Sep 16 04:50:45.897894 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 16 04:50:45.897904 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 16 04:50:45.897913 kernel: extended physical RAM map:
Sep 16 04:50:45.897922 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 16 04:50:45.897931 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 16 04:50:45.897940 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 16 04:50:45.897949 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 16 04:50:45.897959 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 16 04:50:45.897971 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 16 04:50:45.897980 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 16 04:50:45.897989 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 16 04:50:45.897998 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 16 04:50:45.898011 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 16 04:50:45.898021 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 16 04:50:45.898034 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 16 04:50:45.898043 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 16 04:50:45.898053 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 16 04:50:45.898063 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 16 04:50:45.898083 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 16 04:50:45.898093 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 16 04:50:45.898104 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 16 04:50:45.898113 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 16 04:50:45.898123 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 16 04:50:45.898133 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 16 04:50:45.898146 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 16 04:50:45.898156 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 16 04:50:45.898166 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 16 04:50:45.898176 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 16 04:50:45.898186 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 16 04:50:45.898196 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 16 04:50:45.898209 kernel: efi: EFI v2.7 by EDK II
Sep 16 04:50:45.898220 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 16 04:50:45.898229 kernel: random: crng init done
Sep 16 04:50:45.898242 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 16 04:50:45.898252 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 16 04:50:45.898267 kernel: secureboot: Secure boot disabled
Sep 16 04:50:45.898277 kernel: SMBIOS 2.8 present.
Sep 16 04:50:45.898287 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 16 04:50:45.898297 kernel: DMI: Memory slots populated: 1/1
Sep 16 04:50:45.898307 kernel: Hypervisor detected: KVM
Sep 16 04:50:45.898317 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 16 04:50:45.898327 kernel: kvm-clock: using sched offset of 5451167446 cycles
Sep 16 04:50:45.898338 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 16 04:50:45.898348 kernel: tsc: Detected 2794.750 MHz processor
Sep 16 04:50:45.898359 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 16 04:50:45.898369 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 16 04:50:45.898382 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 16 04:50:45.898392 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 16 04:50:45.898403 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 16 04:50:45.898413 kernel: Using GB pages for direct mapping
Sep 16 04:50:45.898423 kernel: ACPI: Early table checksum verification disabled
Sep 16 04:50:45.898433 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 16 04:50:45.898444 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 16 04:50:45.898454 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:50:45.898464 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:50:45.898477 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 16 04:50:45.898487 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:50:45.898497 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:50:45.898508 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:50:45.898518 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:50:45.898528 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 16 04:50:45.898539 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 16 04:50:45.898549 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 16 04:50:45.898562 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 16 04:50:45.898572 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 16 04:50:45.898582 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 16 04:50:45.898593 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 16 04:50:45.898603 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 16 04:50:45.898613 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 16 04:50:45.898623 kernel: No NUMA configuration found
Sep 16 04:50:45.898633 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 16 04:50:45.898644 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 16 04:50:45.898654 kernel: Zone ranges:
Sep 16 04:50:45.898667 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 16 04:50:45.898677 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 16 04:50:45.898687 kernel: Normal empty
Sep 16 04:50:45.898714 kernel: Device empty
Sep 16 04:50:45.898724 kernel: Movable zone start for each node
Sep 16 04:50:45.898734 kernel: Early memory node ranges
Sep 16 04:50:45.898745 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 16 04:50:45.898755 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 16 04:50:45.898769 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 16 04:50:45.898783 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 16 04:50:45.898793 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 16 04:50:45.898804 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 16 04:50:45.898814 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 16 04:50:45.898824 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 16 04:50:45.898834 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 16 04:50:45.898844 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 16 04:50:45.898857 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 16 04:50:45.898879 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 16 04:50:45.898889 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 16 04:50:45.898900 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 16 04:50:45.898910 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 16 04:50:45.898923 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 16 04:50:45.898934 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 16 04:50:45.898944 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 16 04:50:45.898955 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 16 04:50:45.898966 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 16 04:50:45.898979 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 16 04:50:45.898989 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 16 04:50:45.899000 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 16 04:50:45.899010 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 16 04:50:45.899021 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 16 04:50:45.899031 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 16 04:50:45.899042 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 16 04:50:45.899053 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 16 04:50:45.899063 kernel: TSC deadline timer available
Sep 16 04:50:45.899086 kernel: CPU topo: Max. logical packages: 1
Sep 16 04:50:45.899096 kernel: CPU topo: Max. logical dies: 1
Sep 16 04:50:45.899107 kernel: CPU topo: Max. dies per package: 1
Sep 16 04:50:45.899117 kernel: CPU topo: Max. threads per core: 1
Sep 16 04:50:45.899128 kernel: CPU topo: Num. cores per package: 4
Sep 16 04:50:45.899138 kernel: CPU topo: Num. threads per package: 4
Sep 16 04:50:45.899149 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 16 04:50:45.899159 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 16 04:50:45.899170 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 16 04:50:45.899183 kernel: kvm-guest: setup PV sched yield
Sep 16 04:50:45.899194 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 16 04:50:45.899204 kernel: Booting paravirtualized kernel on KVM
Sep 16 04:50:45.899215 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 16 04:50:45.899226 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 16 04:50:45.899236 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 16 04:50:45.899247 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 16 04:50:45.899258 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 16 04:50:45.899268 kernel: kvm-guest: PV spinlocks enabled
Sep 16 04:50:45.899282 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 16 04:50:45.899294 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:50:45.899308 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 16 04:50:45.899319 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 16 04:50:45.899330 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 16 04:50:45.899340 kernel: Fallback order for Node 0: 0
Sep 16 04:50:45.899351 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 16 04:50:45.899361 kernel: Policy zone: DMA32
Sep 16 04:50:45.899375 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 16 04:50:45.899386 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 16 04:50:45.899396 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 16 04:50:45.899407 kernel: ftrace: allocated 157 pages with 5 groups
Sep 16 04:50:45.899418 kernel: Dynamic Preempt: voluntary
Sep 16 04:50:45.899428 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 16 04:50:45.899440 kernel: rcu: RCU event tracing is enabled.
Sep 16 04:50:45.899450 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 16 04:50:45.899461 kernel: Trampoline variant of Tasks RCU enabled.
Sep 16 04:50:45.899472 kernel: Rude variant of Tasks RCU enabled.
Sep 16 04:50:45.899485 kernel: Tracing variant of Tasks RCU enabled.
Sep 16 04:50:45.899496 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 16 04:50:45.899509 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 16 04:50:45.899520 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 16 04:50:45.899531 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 16 04:50:45.899542 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 16 04:50:45.899552 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 16 04:50:45.899563 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 16 04:50:45.899576 kernel: Console: colour dummy device 80x25
Sep 16 04:50:45.899587 kernel: printk: legacy console [ttyS0] enabled
Sep 16 04:50:45.899597 kernel: ACPI: Core revision 20240827
Sep 16 04:50:45.899608 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 16 04:50:45.899619 kernel: APIC: Switch to symmetric I/O mode setup
Sep 16 04:50:45.899629 kernel: x2apic enabled
Sep 16 04:50:45.899640 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 16 04:50:45.899650 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 16 04:50:45.899661 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 16 04:50:45.899674 kernel: kvm-guest: setup PV IPIs
Sep 16 04:50:45.899685 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 16 04:50:45.899717 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 16 04:50:45.899729 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 16 04:50:45.899740 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 16 04:50:45.899750 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 16 04:50:45.899761 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 16 04:50:45.899772 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 16 04:50:45.899782 kernel: Spectre V2 : Mitigation: Retpolines
Sep 16 04:50:45.899796 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 16 04:50:45.899807 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 16 04:50:45.899817 kernel: active return thunk: retbleed_return_thunk
Sep 16 04:50:45.899828 kernel: RETBleed: Mitigation: untrained return thunk
Sep 16 04:50:45.899841 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 16 04:50:45.899852 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 16 04:50:45.899863 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 16 04:50:45.899875 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 16 04:50:45.899886 kernel: active return thunk: srso_return_thunk
Sep 16 04:50:45.899899 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 16 04:50:45.899909 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 16 04:50:45.899920 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 16 04:50:45.899931 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 16 04:50:45.899941 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 16 04:50:45.899952 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 16 04:50:45.899962 kernel: Freeing SMP alternatives memory: 32K
Sep 16 04:50:45.899973 kernel: pid_max: default: 32768 minimum: 301
Sep 16 04:50:45.899983 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 16 04:50:45.899997 kernel: landlock: Up and running.
Sep 16 04:50:45.900007 kernel: SELinux: Initializing.
Sep 16 04:50:45.900018 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 16 04:50:45.900028 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 16 04:50:45.900039 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 16 04:50:45.900050 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 16 04:50:45.900060 kernel: ... version: 0
Sep 16 04:50:45.900079 kernel: ... bit width: 48
Sep 16 04:50:45.900090 kernel: ... generic registers: 6
Sep 16 04:50:45.900103 kernel: ... value mask: 0000ffffffffffff
Sep 16 04:50:45.900114 kernel: ... max period: 00007fffffffffff
Sep 16 04:50:45.900125 kernel: ... fixed-purpose events: 0
Sep 16 04:50:45.900135 kernel: ... event mask: 000000000000003f
Sep 16 04:50:45.900145 kernel: signal: max sigframe size: 1776
Sep 16 04:50:45.900156 kernel: rcu: Hierarchical SRCU implementation.
Sep 16 04:50:45.900167 kernel: rcu: Max phase no-delay instances is 400.
Sep 16 04:50:45.900180 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 16 04:50:45.900191 kernel: smp: Bringing up secondary CPUs ...
Sep 16 04:50:45.900204 kernel: smpboot: x86: Booting SMP configuration:
Sep 16 04:50:45.900215 kernel: .... node #0, CPUs: #1 #2 #3
Sep 16 04:50:45.900225 kernel: smp: Brought up 1 node, 4 CPUs
Sep 16 04:50:45.900236 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 16 04:50:45.900247 kernel: Memory: 2422672K/2565800K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54096K init, 2868K bss, 137196K reserved, 0K cma-reserved)
Sep 16 04:50:45.900257 kernel: devtmpfs: initialized
Sep 16 04:50:45.900268 kernel: x86/mm: Memory block size: 128MB
Sep 16 04:50:45.900279 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 16 04:50:45.900289 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 16 04:50:45.900303 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 16 04:50:45.900313 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 16 04:50:45.900324 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 16 04:50:45.900334 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 16 04:50:45.900345 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 16 04:50:45.900356 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 16 04:50:45.900366 kernel: pinctrl core: initialized pinctrl subsystem
Sep 16 04:50:45.900377 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 16 04:50:45.900390 kernel: audit: initializing netlink subsys (disabled)
Sep 16 04:50:45.900401 kernel: audit: type=2000 audit(1757998242.667:1): state=initialized audit_enabled=0 res=1
Sep 16 04:50:45.900411 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 16 04:50:45.900422 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 16 04:50:45.900432 kernel: cpuidle: using governor menu
Sep 16 04:50:45.900442 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 16 04:50:45.900453 kernel: dca service started, version 1.12.1
Sep 16 04:50:45.900464 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 16 04:50:45.900474 kernel: PCI: Using configuration type 1 for base access
Sep 16 04:50:45.900487 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 16 04:50:45.900498 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 16 04:50:45.900508 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 16 04:50:45.900519 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 16 04:50:45.900530 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 16 04:50:45.900540 kernel: ACPI: Added _OSI(Module Device)
Sep 16 04:50:45.900551 kernel: ACPI: Added _OSI(Processor Device)
Sep 16 04:50:45.900561 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 16 04:50:45.900572 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 16 04:50:45.900585 kernel: ACPI: Interpreter enabled
Sep 16 04:50:45.900596 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 16 04:50:45.900606 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 16 04:50:45.900617 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 16 04:50:45.900628 kernel: PCI: Using E820 reservations for host bridge windows
Sep 16 04:50:45.900639 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 16 04:50:45.900649 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 16 04:50:45.900936 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 16 04:50:45.901131 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 16 04:50:45.901347 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 16 04:50:45.901365 kernel: PCI host bridge to bus 0000:00
Sep 16 04:50:45.901534 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 16 04:50:45.901674 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 16 04:50:45.901840 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 16 04:50:45.901979 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 16 04:50:45.902136 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 16 04:50:45.902273 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 16 04:50:45.902415 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 16 04:50:45.902608 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 16 04:50:45.902834 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 16 04:50:45.902983 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 16 04:50:45.903187 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 16 04:50:45.903365 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 16 04:50:45.903504 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 16 04:50:45.903653 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 16 04:50:45.903820 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 16 04:50:45.903986 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 16 04:50:45.904158 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 16 04:50:45.904382 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 16 04:50:45.904550 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 16 04:50:45.904725 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 16 04:50:45.904887 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 16 04:50:45.905062 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 16 04:50:45.905229 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 16 04:50:45.905383 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 16 04:50:45.905611 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 16 04:50:45.905835 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 16 04:50:45.906033 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 16 04:50:45.906203 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 16 04:50:45.906406 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 16 04:50:45.906562 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 16 04:50:45.906768 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 16 04:50:45.907000 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 16 04:50:45.907171 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 16 04:50:45.907186 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 16 04:50:45.907197 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 16 04:50:45.907209 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 16 04:50:45.907220 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 16 04:50:45.907232 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 16 04:50:45.907243 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 16 04:50:45.907259 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 16 04:50:45.907270 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 16 04:50:45.907281 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 16 04:50:45.907292 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 16 04:50:45.907303 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 16 04:50:45.907314 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 16 04:50:45.907325 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 16 04:50:45.907336 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 16 04:50:45.907347 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 16 04:50:45.907362 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 16 04:50:45.907373 kernel: iommu: Default domain type: Translated
Sep 16 04:50:45.907384 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 16 04:50:45.907395 kernel: efivars: Registered efivars operations
Sep 16 04:50:45.907406 kernel: PCI: Using ACPI for IRQ routing
Sep 16 04:50:45.907417 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 16 04:50:45.907429 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 16 04:50:45.907440 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 16 04:50:45.907451 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 16 04:50:45.907465 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 16 04:50:45.907475 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 16 04:50:45.907487 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 16 04:50:45.907498 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 16 04:50:45.907509 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 16 04:50:45.907677 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 16 04:50:45.907864 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 16 04:50:45.908023 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 16 04:50:45.908044 kernel: vgaarb: loaded
Sep 16 04:50:45.908056 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 16 04:50:45.908067 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 16 04:50:45.908089 kernel: clocksource: Switched to clocksource kvm-clock
Sep 16 04:50:45.908100 kernel: VFS: Disk quotas dquot_6.6.0
Sep 16 04:50:45.908112 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 16 04:50:45.908123 kernel: pnp: PnP ACPI init
Sep 16 04:50:45.908326 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 16 04:50:45.908350 kernel: pnp: PnP ACPI: found 6 devices
Sep 16 04:50:45.908362 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 16 04:50:45.908374 kernel: NET: Registered PF_INET protocol family
Sep 16 04:50:45.908386 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 16 04:50:45.908398 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 16 04:50:45.908410 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 16 04:50:45.908422 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 16 04:50:45.908434 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 16 04:50:45.908451 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 16 04:50:45.908463 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 16 04:50:45.908474 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 16 04:50:45.908486 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 16 04:50:45.908498 kernel: NET: Registered PF_XDP protocol family
Sep 16 04:50:45.908658 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 16 04:50:45.908842 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 16 04:50:45.908989 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 16 04:50:45.909143 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 16 04:50:45.909292 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 16 04:50:45.909433 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 16 04:50:45.909578 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 16 04:50:45.909740 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 16 04:50:45.909758 kernel: PCI: CLS 0 bytes, default 64
Sep 16 04:50:45.909770 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 16 04:50:45.909783 kernel: Initialise system trusted keyrings
Sep 16 04:50:45.909799 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 16 04:50:45.909811 kernel: Key type asymmetric registered
Sep 16 04:50:45.909823 kernel: Asymmetric key parser 'x509' registered
Sep 16 04:50:45.909834 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 16 04:50:45.909846 kernel: io scheduler mq-deadline registered
Sep 16 04:50:45.909858 kernel: io scheduler kyber registered
Sep 16 04:50:45.909869 kernel: io scheduler bfq registered
Sep 16 04:50:45.909884 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 16 04:50:45.909897 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 16 04:50:45.909909 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 16 04:50:45.909920 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 16 04:50:45.909932 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 16 04:50:45.909944 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 16 04:50:45.909955 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 16 04:50:45.909967 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 16 04:50:45.909979 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 16 04:50:45.910164 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 16 04:50:45.910317 kernel: rtc_cmos 00:04: registered as rtc0
Sep 16 04:50:45.910468 kernel: rtc_cmos 00:04: setting system clock to 2025-09-16T04:50:45 UTC (1757998245)
Sep 16 04:50:45.910485 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 16 04:50:45.910629 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 16 04:50:45.910644 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 16 04:50:45.910656 kernel: efifb: probing for efifb
Sep 16 04:50:45.910673 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 16 04:50:45.910685 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 16 04:50:45.910696 kernel: efifb: scrolling: redraw
Sep 16 04:50:45.910724 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 16 04:50:45.910736 kernel: Console: switching to colour frame buffer device 160x50
Sep 16 04:50:45.910748 kernel: fb0: EFI VGA frame buffer device
Sep 16 04:50:45.910759 kernel: pstore: Using crash dump compression: deflate
Sep 16 04:50:45.910771 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 16 04:50:45.910783 kernel: NET: Registered PF_INET6 protocol family
Sep 16 04:50:45.910794 kernel: Segment Routing with IPv6
Sep 16 04:50:45.910810 kernel: In-situ OAM (IOAM) with IPv6
Sep 16 04:50:45.910822 kernel: NET: Registered PF_PACKET protocol family
Sep 16 04:50:45.910833 kernel: Key type dns_resolver registered
Sep 16 04:50:45.910845 kernel: IPI shorthand broadcast: enabled
Sep 16 04:50:45.910857 kernel: sched_clock: Marking stable (3579004768, 159708519)->(3771167578, -32454291)
Sep 16 04:50:45.910868 kernel: registered taskstats version 1
Sep 16 04:50:45.910880 kernel: Loading compiled-in X.509 certificates
Sep 16 04:50:45.910892 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: d1d5b0d56b9b23dabf19e645632ff93bf659b3bf'
Sep 16 04:50:45.910903 kernel: Demotion targets for Node 0: null
Sep 16 04:50:45.910918 kernel: Key type .fscrypt registered
Sep 16 04:50:45.910929 kernel: Key
type fscrypt-provisioning registered Sep 16 04:50:45.910941 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 16 04:50:45.910952 kernel: ima: Allocated hash algorithm: sha1 Sep 16 04:50:45.910964 kernel: ima: No architecture policies found Sep 16 04:50:45.910976 kernel: clk: Disabling unused clocks Sep 16 04:50:45.910987 kernel: Warning: unable to open an initial console. Sep 16 04:50:45.910999 kernel: Freeing unused kernel image (initmem) memory: 54096K Sep 16 04:50:45.911014 kernel: Write protecting the kernel read-only data: 24576k Sep 16 04:50:45.911026 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K Sep 16 04:50:45.911037 kernel: Run /init as init process Sep 16 04:50:45.911049 kernel: with arguments: Sep 16 04:50:45.911060 kernel: /init Sep 16 04:50:45.911081 kernel: with environment: Sep 16 04:50:45.911093 kernel: HOME=/ Sep 16 04:50:45.911104 kernel: TERM=linux Sep 16 04:50:45.911116 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 16 04:50:45.911129 systemd[1]: Successfully made /usr/ read-only. Sep 16 04:50:45.911148 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:50:45.911161 systemd[1]: Detected virtualization kvm. Sep 16 04:50:45.911173 systemd[1]: Detected architecture x86-64. Sep 16 04:50:45.911185 systemd[1]: Running in initrd. Sep 16 04:50:45.911197 systemd[1]: No hostname configured, using default hostname. Sep 16 04:50:45.911210 systemd[1]: Hostname set to . Sep 16 04:50:45.911225 systemd[1]: Initializing machine ID from VM UUID. Sep 16 04:50:45.911237 systemd[1]: Queued start job for default target initrd.target. 
Sep 16 04:50:45.911249 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:50:45.911262 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:50:45.911275 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 16 04:50:45.911287 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:50:45.911299 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 16 04:50:45.911313 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 16 04:50:45.911330 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 16 04:50:45.911343 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 16 04:50:45.911355 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:50:45.911368 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:50:45.911380 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:50:45.911393 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:50:45.911406 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:50:45.911421 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:50:45.911436 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:50:45.911449 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:50:45.911464 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 16 04:50:45.911477 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Sep 16 04:50:45.911492 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:50:45.911506 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 04:50:45.911522 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:50:45.911535 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:50:45.911552 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 16 04:50:45.911564 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:50:45.911577 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 16 04:50:45.911590 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 16 04:50:45.911602 systemd[1]: Starting systemd-fsck-usr.service... Sep 16 04:50:45.911615 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:50:45.911627 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:50:45.911639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:50:45.911652 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 16 04:50:45.911669 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:50:45.911681 systemd[1]: Finished systemd-fsck-usr.service. Sep 16 04:50:45.911754 systemd-journald[220]: Collecting audit messages is disabled. Sep 16 04:50:45.911790 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 04:50:45.911801 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:50:45.911813 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 16 04:50:45.911824 systemd-journald[220]: Journal started Sep 16 04:50:45.911850 systemd-journald[220]: Runtime Journal (/run/log/journal/984e6d2952d042f8a098aa41e8d08bd0) is 6M, max 48.4M, 42.4M free. Sep 16 04:50:45.903943 systemd-modules-load[222]: Inserted module 'overlay' Sep 16 04:50:45.917830 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:50:45.919627 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:50:45.926634 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:50:45.927848 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:50:45.937368 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 16 04:50:45.939121 systemd-modules-load[222]: Inserted module 'br_netfilter' Sep 16 04:50:45.939194 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 16 04:50:45.940541 kernel: Bridge firewalling registered Sep 16 04:50:45.941241 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:50:45.944871 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:50:45.945155 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:50:45.948289 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:50:45.960639 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:50:45.961234 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:50:45.965333 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 16 04:50:45.967966 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 16 04:50:45.999526 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 04:50:46.023043 systemd-resolved[262]: Positive Trust Anchors: Sep 16 04:50:46.023059 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:50:46.023109 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:50:46.026246 systemd-resolved[262]: Defaulting to hostname 'linux'. Sep 16 04:50:46.033255 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:50:46.036792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:50:46.115732 kernel: SCSI subsystem initialized Sep 16 04:50:46.128724 kernel: Loading iSCSI transport class v2.0-870. Sep 16 04:50:46.140737 kernel: iscsi: registered transport (tcp) Sep 16 04:50:46.171102 kernel: iscsi: registered transport (qla4xxx) Sep 16 04:50:46.171211 kernel: QLogic iSCSI HBA Driver Sep 16 04:50:46.198637 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 16 04:50:46.229857 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:50:46.231481 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:50:46.304099 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 16 04:50:46.306156 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 16 04:50:46.377759 kernel: raid6: avx2x4 gen() 25970 MB/s Sep 16 04:50:46.394750 kernel: raid6: avx2x2 gen() 27177 MB/s Sep 16 04:50:46.412051 kernel: raid6: avx2x1 gen() 20947 MB/s Sep 16 04:50:46.412158 kernel: raid6: using algorithm avx2x2 gen() 27177 MB/s Sep 16 04:50:46.429855 kernel: raid6: .... xor() 15019 MB/s, rmw enabled Sep 16 04:50:46.429956 kernel: raid6: using avx2x2 recovery algorithm Sep 16 04:50:46.455745 kernel: xor: automatically using best checksumming function avx Sep 16 04:50:46.668741 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 16 04:50:46.678116 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:50:46.680124 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:50:46.734821 systemd-udevd[471]: Using default interface naming scheme 'v255'. Sep 16 04:50:46.741756 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:50:46.745293 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 16 04:50:46.791552 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation Sep 16 04:50:46.830992 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 04:50:46.833829 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:50:46.923984 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:50:46.927529 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 16 04:50:46.973740 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 16 04:50:46.982354 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 16 04:50:46.982685 kernel: cryptd: max_cpu_qlen set to 1000 Sep 16 04:50:46.986727 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 16 04:50:46.994918 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 16 04:50:46.994959 kernel: GPT:9289727 != 19775487 Sep 16 04:50:46.994970 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 16 04:50:46.994980 kernel: GPT:9289727 != 19775487 Sep 16 04:50:46.994990 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 16 04:50:46.995000 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:50:46.996737 kernel: libata version 3.00 loaded. Sep 16 04:50:47.000723 kernel: AES CTR mode by8 optimization enabled Sep 16 04:50:47.009736 kernel: ahci 0000:00:1f.2: version 3.0 Sep 16 04:50:47.011734 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 16 04:50:47.017340 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 16 04:50:47.017564 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 16 04:50:47.017733 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 16 04:50:47.024733 kernel: scsi host0: ahci Sep 16 04:50:47.030216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:50:47.030693 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:50:47.034574 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:50:47.037598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:50:47.041865 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 16 04:50:47.048745 kernel: scsi host1: ahci Sep 16 04:50:47.062735 kernel: scsi host2: ahci Sep 16 04:50:47.064723 kernel: scsi host3: ahci Sep 16 04:50:47.066732 kernel: scsi host4: ahci Sep 16 04:50:47.073523 kernel: scsi host5: ahci Sep 16 04:50:47.073860 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 16 04:50:47.073880 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 16 04:50:47.073895 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 16 04:50:47.073919 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 16 04:50:47.073675 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 16 04:50:47.080071 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 16 04:50:47.080096 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 16 04:50:47.086069 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:50:47.105257 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 16 04:50:47.112479 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 16 04:50:47.113798 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 16 04:50:47.125077 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 16 04:50:47.126247 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 16 04:50:47.167630 disk-uuid[633]: Primary Header is updated. Sep 16 04:50:47.167630 disk-uuid[633]: Secondary Entries is updated. Sep 16 04:50:47.167630 disk-uuid[633]: Secondary Header is updated. 
Sep 16 04:50:47.172737 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:50:47.176729 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:50:47.385754 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 16 04:50:47.385843 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 16 04:50:47.386734 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 16 04:50:47.393769 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 16 04:50:47.393862 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 16 04:50:47.394734 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 16 04:50:47.395744 kernel: ata3.00: LPM support broken, forcing max_power Sep 16 04:50:47.397045 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 16 04:50:47.397068 kernel: ata3.00: applying bridge limits Sep 16 04:50:47.398189 kernel: ata3.00: LPM support broken, forcing max_power Sep 16 04:50:47.398207 kernel: ata3.00: configured for UDMA/100 Sep 16 04:50:47.398730 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 16 04:50:47.463836 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 16 04:50:47.464263 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 16 04:50:47.477771 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 16 04:50:47.903852 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 16 04:50:47.908130 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:50:47.909823 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:50:47.910997 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:50:47.914338 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 16 04:50:47.943547 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 16 04:50:48.200772 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:50:48.201747 disk-uuid[634]: The operation has completed successfully. Sep 16 04:50:48.233175 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 16 04:50:48.233351 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 16 04:50:48.280220 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 16 04:50:48.303909 sh[663]: Success Sep 16 04:50:48.326151 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 16 04:50:48.326223 kernel: device-mapper: uevent: version 1.0.3 Sep 16 04:50:48.327483 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 16 04:50:48.338829 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 16 04:50:48.373567 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 16 04:50:48.378435 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 16 04:50:48.408293 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 16 04:50:48.417037 kernel: BTRFS: device fsid f1b91845-3914-4d21-a370-6d760ee45b2e devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (675) Sep 16 04:50:48.417078 kernel: BTRFS info (device dm-0): first mount of filesystem f1b91845-3914-4d21-a370-6d760ee45b2e Sep 16 04:50:48.418525 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:50:48.423736 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 16 04:50:48.423771 kernel: BTRFS info (device dm-0): enabling free space tree Sep 16 04:50:48.425151 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 16 04:50:48.425730 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Sep 16 04:50:48.426109 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 16 04:50:48.427006 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 16 04:50:48.428034 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 16 04:50:48.456605 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (708) Sep 16 04:50:48.456661 kernel: BTRFS info (device vda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:50:48.456674 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:50:48.460390 kernel: BTRFS info (device vda6): turning on async discard Sep 16 04:50:48.460434 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 04:50:48.465724 kernel: BTRFS info (device vda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:50:48.466074 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 16 04:50:48.469356 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 16 04:50:48.573897 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:50:48.586476 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 16 04:50:48.667388 ignition[753]: Ignition 2.22.0 Sep 16 04:50:48.667818 ignition[753]: Stage: fetch-offline Sep 16 04:50:48.668405 ignition[753]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:50:48.668420 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:50:48.668555 ignition[753]: parsed url from cmdline: "" Sep 16 04:50:48.670731 systemd-networkd[850]: lo: Link UP Sep 16 04:50:48.668564 ignition[753]: no config URL provided Sep 16 04:50:48.670736 systemd-networkd[850]: lo: Gained carrier Sep 16 04:50:48.668571 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:50:48.672485 systemd-networkd[850]: Enumeration completed Sep 16 04:50:48.668596 ignition[753]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:50:48.672799 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:50:48.668630 ignition[753]: op(1): [started] loading QEMU firmware config module Sep 16 04:50:48.673509 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:50:48.668639 ignition[753]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 16 04:50:48.673513 systemd-networkd[850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:50:48.680490 ignition[753]: op(1): [finished] loading QEMU firmware config module Sep 16 04:50:48.673951 systemd-networkd[850]: eth0: Link UP Sep 16 04:50:48.675350 systemd[1]: Reached target network.target - Network. Sep 16 04:50:48.675655 systemd-networkd[850]: eth0: Gained carrier Sep 16 04:50:48.675668 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 16 04:50:48.697785 systemd-networkd[850]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 16 04:50:48.729448 ignition[753]: parsing config with SHA512: f515a6534ee8201c9509b1268b63861cf416eb132e497af1187e62842b1307f724ef9f49f918ef77b50227c9fd0134279baa39f70789aea721f2de764a7f4076 Sep 16 04:50:48.733915 unknown[753]: fetched base config from "system" Sep 16 04:50:48.733930 unknown[753]: fetched user config from "qemu" Sep 16 04:50:48.734339 ignition[753]: fetch-offline: fetch-offline passed Sep 16 04:50:48.734412 ignition[753]: Ignition finished successfully Sep 16 04:50:48.741794 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:50:48.742084 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 16 04:50:48.743092 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 16 04:50:48.800040 ignition[860]: Ignition 2.22.0 Sep 16 04:50:48.800059 ignition[860]: Stage: kargs Sep 16 04:50:48.800218 ignition[860]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:50:48.800230 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:50:48.800978 ignition[860]: kargs: kargs passed Sep 16 04:50:48.801049 ignition[860]: Ignition finished successfully Sep 16 04:50:48.809920 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 16 04:50:48.812258 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 16 04:50:48.919360 ignition[868]: Ignition 2.22.0 Sep 16 04:50:48.919378 ignition[868]: Stage: disks Sep 16 04:50:48.919556 ignition[868]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:50:48.919568 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:50:48.923848 ignition[868]: disks: disks passed Sep 16 04:50:48.923961 ignition[868]: Ignition finished successfully Sep 16 04:50:48.929404 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 16 04:50:48.931750 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 16 04:50:48.931839 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 16 04:50:48.934088 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:50:48.934451 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:50:48.934968 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:50:48.936510 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 16 04:50:48.968418 systemd-fsck[879]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 16 04:50:48.976276 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 16 04:50:48.979042 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 16 04:50:49.093726 kernel: EXT4-fs (vda9): mounted filesystem fb1cb44f-955b-4cd0-8849-33ce3640d547 r/w with ordered data mode. Quota mode: none. Sep 16 04:50:49.094233 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 16 04:50:49.095844 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 16 04:50:49.098655 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:50:49.100565 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 16 04:50:49.102094 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 16 04:50:49.102150 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 16 04:50:49.102183 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:50:49.116292 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 16 04:50:49.118110 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 16 04:50:49.122106 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887) Sep 16 04:50:49.122129 kernel: BTRFS info (device vda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:50:49.124171 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:50:49.127308 kernel: BTRFS info (device vda6): turning on async discard Sep 16 04:50:49.127337 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 04:50:49.129470 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 04:50:49.167019 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory Sep 16 04:50:49.171094 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory Sep 16 04:50:49.174812 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory Sep 16 04:50:49.178358 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory Sep 16 04:50:49.264874 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 16 04:50:49.266256 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 16 04:50:49.269045 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 16 04:50:49.289765 kernel: BTRFS info (device vda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:50:49.301982 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 16 04:50:49.318676 ignition[1001]: INFO : Ignition 2.22.0
Sep 16 04:50:49.318676 ignition[1001]: INFO : Stage: mount
Sep 16 04:50:49.320626 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:50:49.320626 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 16 04:50:49.320626 ignition[1001]: INFO : mount: mount passed
Sep 16 04:50:49.320626 ignition[1001]: INFO : Ignition finished successfully
Sep 16 04:50:49.322854 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 16 04:50:49.325793 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 16 04:50:49.416278 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 16 04:50:49.417943 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:50:49.472720 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1013)
Sep 16 04:50:49.474748 kernel: BTRFS info (device vda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:50:49.474771 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:50:49.477722 kernel: BTRFS info (device vda6): turning on async discard
Sep 16 04:50:49.477740 kernel: BTRFS info (device vda6): enabling free space tree
Sep 16 04:50:49.479426 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:50:49.515161 ignition[1030]: INFO : Ignition 2.22.0
Sep 16 04:50:49.515161 ignition[1030]: INFO : Stage: files
Sep 16 04:50:49.521388 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:50:49.521388 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 16 04:50:49.521388 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping
Sep 16 04:50:49.521388 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 16 04:50:49.521388 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 16 04:50:49.528081 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 16 04:50:49.528081 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 16 04:50:49.531615 unknown[1030]: wrote ssh authorized keys file for user: core
Sep 16 04:50:49.533019 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 16 04:50:49.534412 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 16 04:50:49.534412 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 16 04:50:49.585902 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 16 04:50:50.124395 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 16 04:50:50.124395 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 16 04:50:50.128580 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 16 04:50:50.128580 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:50:50.128580 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:50:50.128580 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:50:50.128580 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:50:50.128580 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:50:50.128580 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:50:50.141082 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:50:50.141082 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:50:50.141082 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 04:50:50.141082 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 04:50:50.141082 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 04:50:50.141082 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 16 04:50:50.170909 systemd-networkd[850]: eth0: Gained IPv6LL
Sep 16 04:50:50.575579 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 16 04:50:51.397255 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 04:50:51.397255 ignition[1030]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 16 04:50:51.401339 ignition[1030]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:50:51.408179 ignition[1030]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:50:51.408179 ignition[1030]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 16 04:50:51.408179 ignition[1030]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 16 04:50:51.412540 ignition[1030]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 16 04:50:51.412540 ignition[1030]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 16 04:50:51.412540 ignition[1030]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 16 04:50:51.418111 ignition[1030]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 16 04:50:51.439658 ignition[1030]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 16 04:50:51.445731 ignition[1030]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 16 04:50:51.447474 ignition[1030]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 16 04:50:51.447474 ignition[1030]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 16 04:50:51.447474 ignition[1030]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 16 04:50:51.447474 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:50:51.447474 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:50:51.447474 ignition[1030]: INFO : files: files passed
Sep 16 04:50:51.447474 ignition[1030]: INFO : Ignition finished successfully
Sep 16 04:50:51.454670 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 16 04:50:51.459562 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 16 04:50:51.463513 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 16 04:50:51.479615 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 16 04:50:51.479838 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 16 04:50:51.483434 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 16 04:50:51.487377 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:50:51.487377 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:50:51.490794 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:50:51.494268 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:50:51.497059 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 16 04:50:51.498107 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 16 04:50:51.576178 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 16 04:50:51.576333 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 16 04:50:51.577789 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 16 04:50:51.580248 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 16 04:50:51.580741 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 16 04:50:51.585110 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 16 04:50:51.629985 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:50:51.633873 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 16 04:50:51.673728 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:50:51.676344 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:50:51.678436 systemd[1]: Stopped target timers.target - Timer Units.
Sep 16 04:50:51.680765 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 16 04:50:51.680887 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:50:51.684107 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 16 04:50:51.685294 systemd[1]: Stopped target basic.target - Basic System.
Sep 16 04:50:51.687452 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 16 04:50:51.689758 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:50:51.691938 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 16 04:50:51.694322 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 04:50:51.696798 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 16 04:50:51.699066 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 04:50:51.701564 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 16 04:50:51.703733 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 16 04:50:51.706026 systemd[1]: Stopped target swap.target - Swaps.
Sep 16 04:50:51.707885 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 16 04:50:51.708072 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 04:50:51.710407 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:50:51.711987 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:50:51.714206 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 16 04:50:51.714379 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:50:51.716432 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 16 04:50:51.716543 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:50:51.719136 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 16 04:50:51.719250 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 04:50:51.721071 systemd[1]: Stopped target paths.target - Path Units.
Sep 16 04:50:51.722874 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 16 04:50:51.727508 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:50:51.729041 systemd[1]: Stopped target slices.target - Slice Units.
Sep 16 04:50:51.731283 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 16 04:50:51.733295 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 16 04:50:51.733395 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 04:50:51.735591 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 16 04:50:51.735686 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:50:51.737345 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 16 04:50:51.737511 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:50:51.739496 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 16 04:50:51.739612 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 16 04:50:51.743382 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 16 04:50:51.743480 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 16 04:50:51.743621 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:50:51.746093 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 16 04:50:51.749507 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 16 04:50:51.750171 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:50:51.754349 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 16 04:50:51.754507 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:50:51.767329 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 16 04:50:51.767467 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 16 04:50:51.793947 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 16 04:50:51.882800 ignition[1085]: INFO : Ignition 2.22.0
Sep 16 04:50:51.882800 ignition[1085]: INFO : Stage: umount
Sep 16 04:50:51.886430 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:50:51.886430 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 16 04:50:51.886430 ignition[1085]: INFO : umount: umount passed
Sep 16 04:50:51.886430 ignition[1085]: INFO : Ignition finished successfully
Sep 16 04:50:51.889316 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 16 04:50:51.889471 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 16 04:50:51.891609 systemd[1]: Stopped target network.target - Network.
Sep 16 04:50:51.893386 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 16 04:50:51.893479 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 16 04:50:51.895663 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 16 04:50:51.895748 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 16 04:50:51.896863 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 16 04:50:51.896935 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 16 04:50:51.897024 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 16 04:50:51.897077 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 16 04:50:51.897280 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 16 04:50:51.897392 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 16 04:50:51.908261 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 16 04:50:51.908435 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 16 04:50:51.912935 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 16 04:50:51.913186 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 16 04:50:51.913343 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 16 04:50:51.917057 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 16 04:50:51.917816 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 16 04:50:51.919119 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 16 04:50:51.919163 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:50:51.922135 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 16 04:50:51.923082 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 16 04:50:51.923140 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 04:50:51.925504 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 16 04:50:51.925559 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:50:51.929992 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 16 04:50:51.930046 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:50:51.932535 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 16 04:50:51.932597 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:50:51.935885 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:50:51.938836 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 16 04:50:51.938940 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:50:51.951449 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 16 04:50:51.951648 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:50:51.954576 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 16 04:50:51.954730 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 16 04:50:51.956962 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 16 04:50:51.957037 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:50:51.958686 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 16 04:50:51.958750 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:50:51.961230 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 16 04:50:51.961315 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 04:50:51.963350 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 16 04:50:51.963406 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 16 04:50:51.965841 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 16 04:50:51.965901 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 04:50:51.969002 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 16 04:50:51.970172 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 16 04:50:51.970228 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:50:51.972605 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 16 04:50:51.972655 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:50:51.977301 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:50:51.977366 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:50:51.982154 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 16 04:50:51.982231 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 16 04:50:51.982297 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:50:51.982736 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 16 04:50:51.982861 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 16 04:50:51.984380 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 16 04:50:51.984478 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 16 04:50:51.988344 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 16 04:50:51.988470 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 16 04:50:51.989130 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 16 04:50:51.992978 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 16 04:50:52.021872 systemd[1]: Switching root.
Sep 16 04:50:52.066136 systemd-journald[220]: Journal stopped
Sep 16 04:50:53.400095 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 16 04:50:53.400225 kernel: SELinux: policy capability network_peer_controls=1
Sep 16 04:50:53.400261 kernel: SELinux: policy capability open_perms=1
Sep 16 04:50:53.400278 kernel: SELinux: policy capability extended_socket_class=1
Sep 16 04:50:53.400293 kernel: SELinux: policy capability always_check_network=0
Sep 16 04:50:53.400312 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 16 04:50:53.400337 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 16 04:50:53.400353 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 16 04:50:53.400377 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 16 04:50:53.400393 kernel: SELinux: policy capability userspace_initial_context=0
Sep 16 04:50:53.400416 kernel: audit: type=1403 audit(1757998252.504:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 16 04:50:53.400439 systemd[1]: Successfully loaded SELinux policy in 70.761ms.
Sep 16 04:50:53.400472 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.372ms.
Sep 16 04:50:53.400494 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 04:50:53.400512 systemd[1]: Detected virtualization kvm.
Sep 16 04:50:53.400534 systemd[1]: Detected architecture x86-64.
Sep 16 04:50:53.400563 systemd[1]: Detected first boot.
Sep 16 04:50:53.400579 systemd[1]: Initializing machine ID from VM UUID.
Sep 16 04:50:53.400595 zram_generator::config[1130]: No configuration found.
Sep 16 04:50:53.400619 kernel: Guest personality initialized and is inactive
Sep 16 04:50:53.400635 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 16 04:50:53.400664 kernel: Initialized host personality
Sep 16 04:50:53.400679 kernel: NET: Registered PF_VSOCK protocol family
Sep 16 04:50:53.400696 systemd[1]: Populated /etc with preset unit settings.
Sep 16 04:50:53.400742 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 16 04:50:53.400760 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 16 04:50:53.400777 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 16 04:50:53.400797 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 16 04:50:53.400817 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 16 04:50:53.400836 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 16 04:50:53.400852 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 16 04:50:53.400869 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 16 04:50:53.400907 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 16 04:50:53.400926 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 16 04:50:53.400948 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 16 04:50:53.400966 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 16 04:50:53.400982 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:50:53.400999 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:50:53.401016 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 16 04:50:53.401034 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 16 04:50:53.401052 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 16 04:50:53.401079 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 04:50:53.401110 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 16 04:50:53.401127 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:50:53.401148 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:50:53.401164 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 16 04:50:53.401182 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 16 04:50:53.401199 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 16 04:50:53.401217 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 16 04:50:53.401243 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:50:53.401268 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 04:50:53.401285 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 04:50:53.401302 systemd[1]: Reached target swap.target - Swaps.
Sep 16 04:50:53.401323 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 16 04:50:53.401341 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 16 04:50:53.401358 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 16 04:50:53.401376 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:50:53.401398 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:50:53.401422 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:50:53.401439 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 16 04:50:53.401459 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 16 04:50:53.401476 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 16 04:50:53.401492 systemd[1]: Mounting media.mount - External Media Directory...
Sep 16 04:50:53.401520 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:50:53.401537 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 16 04:50:53.401554 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 16 04:50:53.401571 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 16 04:50:53.401596 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 16 04:50:53.401613 systemd[1]: Reached target machines.target - Containers.
Sep 16 04:50:53.401633 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 16 04:50:53.401650 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:50:53.401670 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 04:50:53.401689 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 16 04:50:53.401726 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:50:53.401744 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 04:50:53.401769 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:50:53.401785 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 16 04:50:53.401802 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:50:53.401824 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 16 04:50:53.401841 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 16 04:50:53.401858 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 16 04:50:53.401874 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 16 04:50:53.401901 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 16 04:50:53.401925 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:50:53.401953 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 04:50:53.401970 kernel: loop: module loaded
Sep 16 04:50:53.401987 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 04:50:53.402004 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 04:50:53.402020 kernel: fuse: init (API version 7.41)
Sep 16 04:50:53.402037 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 16 04:50:53.402054 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 16 04:50:53.402080 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 04:50:53.402101 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 16 04:50:53.402119 systemd[1]: Stopped verity-setup.service.
Sep 16 04:50:53.402175 systemd-journald[1201]: Collecting audit messages is disabled.
Sep 16 04:50:53.402220 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:50:53.402249 systemd-journald[1201]: Journal started
Sep 16 04:50:53.402280 systemd-journald[1201]: Runtime Journal (/run/log/journal/984e6d2952d042f8a098aa41e8d08bd0) is 6M, max 48.4M, 42.4M free.
Sep 16 04:50:53.157377 systemd[1]: Queued start job for default target multi-user.target.
Sep 16 04:50:53.404861 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 04:50:53.171212 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 16 04:50:53.171793 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 16 04:50:53.408357 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 16 04:50:53.410914 kernel: ACPI: bus type drm_connector registered
Sep 16 04:50:53.410619 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 16 04:50:53.413205 systemd[1]: Mounted media.mount - External Media Directory.
Sep 16 04:50:53.414526 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 16 04:50:53.415966 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 16 04:50:53.417628 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 16 04:50:53.419157 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 16 04:50:53.421014 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:50:53.422821 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 16 04:50:53.423122 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 16 04:50:53.424838 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:50:53.425122 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:50:53.426842 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 04:50:53.427142 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 04:50:53.428844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:50:53.429113 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:50:53.430742 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 16 04:50:53.431004 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 16 04:50:53.432503 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:50:53.433057 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:50:53.434630 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:50:53.436221 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:50:53.438208 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 16 04:50:53.440147 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 16 04:50:53.457642 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 04:50:53.460486 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 16 04:50:53.462844 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 16 04:50:53.464049 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 16 04:50:53.464083 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 04:50:53.466061 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 16 04:50:53.480854 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 16 04:50:53.482149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:50:53.484081 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 16 04:50:53.488660 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 16 04:50:53.491227 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 04:50:53.494552 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 16 04:50:53.496316 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 04:50:53.498633 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:50:53.505846 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 16 04:50:53.509030 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 16 04:50:53.513999 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 16 04:50:53.515967 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 16 04:50:53.522897 systemd-journald[1201]: Time spent on flushing to /var/log/journal/984e6d2952d042f8a098aa41e8d08bd0 is 34.278ms for 1070 entries.
Sep 16 04:50:53.522897 systemd-journald[1201]: System Journal (/var/log/journal/984e6d2952d042f8a098aa41e8d08bd0) is 8M, max 195.6M, 187.6M free.
Sep 16 04:50:53.582070 systemd-journald[1201]: Received client request to flush runtime journal.
Sep 16 04:50:53.582368 kernel: loop0: detected capacity change from 0 to 110984
Sep 16 04:50:53.582435 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 16 04:50:53.522221 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 16 04:50:53.527974 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 16 04:50:53.531945 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 16 04:50:53.553944 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:50:53.557075 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:50:53.586146 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 16 04:50:53.590026 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 16 04:50:53.593733 kernel: loop1: detected capacity change from 0 to 128016
Sep 16 04:50:53.595894 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 04:50:53.603033 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 16 04:50:53.626267 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Sep 16 04:50:53.626778 kernel: loop2: detected capacity change from 0 to 221472
Sep 16 04:50:53.626293 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Sep 16 04:50:53.633782 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:50:53.661755 kernel: loop3: detected capacity change from 0 to 110984
Sep 16 04:50:53.698752 kernel: loop4: detected capacity change from 0 to 128016
Sep 16 04:50:53.870949 kernel: loop5: detected capacity change from 0 to 221472
Sep 16 04:50:53.878408 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 16 04:50:53.880190 (sd-merge)[1273]: Merged extensions into '/usr'.
Sep 16 04:50:53.887300 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 16 04:50:53.887487 systemd[1]: Reloading...
Sep 16 04:50:53.989738 zram_generator::config[1299]: No configuration found.
Sep 16 04:50:54.264670 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 16 04:50:54.267383 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 16 04:50:54.268067 systemd[1]: Reloading finished in 379 ms.
Sep 16 04:50:54.294382 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 16 04:50:54.296065 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 16 04:50:54.316077 systemd[1]: Starting ensure-sysext.service...
Sep 16 04:50:54.319297 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 04:50:54.336921 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)...
Sep 16 04:50:54.336950 systemd[1]: Reloading...
Sep 16 04:50:54.377640 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 16 04:50:54.377679 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 16 04:50:54.378086 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 16 04:50:54.378350 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 16 04:50:54.379308 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 16 04:50:54.379583 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Sep 16 04:50:54.379656 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Sep 16 04:50:54.384857 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:50:54.385215 systemd-tmpfiles[1337]: Skipping /boot
Sep 16 04:50:54.406340 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:50:54.406503 systemd-tmpfiles[1337]: Skipping /boot
Sep 16 04:50:54.410723 zram_generator::config[1360]: No configuration found.
Sep 16 04:50:54.602691 systemd[1]: Reloading finished in 265 ms.
Sep 16 04:50:54.625278 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:50:54.648965 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 16 04:50:54.652868 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 16 04:50:54.655933 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 16 04:50:54.666870 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 04:50:54.672834 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 16 04:50:54.678245 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:50:54.678427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:50:54.683845 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:50:54.686572 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:50:54.691381 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:50:54.692625 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:50:54.692755 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:50:54.692864 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:50:54.706818 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 16 04:50:54.720931 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:50:54.721169 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:50:54.722880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:50:54.723103 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:50:54.724744 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:50:54.724974 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:50:54.734253 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 16 04:50:54.740347 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:50:54.740651 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:50:54.742252 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:50:54.760645 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 04:50:54.764112 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:50:54.772554 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:50:54.773894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:50:54.774062 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:50:54.774221 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:50:54.775680 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 16 04:50:54.777516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:50:54.777778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:50:54.780727 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 04:50:54.780964 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 04:50:54.782663 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:50:54.782995 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:50:54.784763 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:50:54.785130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:50:54.790820 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 16 04:50:54.800248 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 16 04:50:54.802315 systemd[1]: Finished ensure-sysext.service.
Sep 16 04:50:54.806040 augenrules[1447]: No rules
Sep 16 04:50:54.809738 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 16 04:50:54.810218 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 16 04:50:54.812135 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 04:50:54.812347 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 04:50:54.814779 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 16 04:50:54.817991 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:50:54.820457 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 16 04:50:54.823340 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 16 04:50:54.825726 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 16 04:50:54.840800 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 16 04:50:54.861607 systemd-udevd[1456]: Using default interface naming scheme 'v255'.
Sep 16 04:50:54.881590 systemd-resolved[1404]: Positive Trust Anchors:
Sep 16 04:50:54.881607 systemd-resolved[1404]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 04:50:54.881645 systemd-resolved[1404]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 04:50:54.885265 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:50:54.889083 systemd-resolved[1404]: Defaulting to hostname 'linux'.
Sep 16 04:50:54.890813 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 04:50:54.899296 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 04:50:54.903083 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:50:54.918574 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 16 04:50:54.920330 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 04:50:54.921694 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 16 04:50:54.923864 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 16 04:50:54.925483 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 16 04:50:54.926812 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 16 04:50:54.928234 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 16 04:50:54.928271 systemd[1]: Reached target paths.target - Path Units.
Sep 16 04:50:54.929329 systemd[1]: Reached target time-set.target - System Time Set.
Sep 16 04:50:54.930752 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 16 04:50:54.932283 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 16 04:50:54.933767 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 04:50:54.935947 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 16 04:50:54.940034 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 16 04:50:54.944885 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 16 04:50:54.946997 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 16 04:50:54.948716 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 16 04:50:54.953296 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 16 04:50:54.955233 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 16 04:50:54.959643 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 16 04:50:54.965251 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 04:50:54.966477 systemd[1]: Reached target basic.target - Basic System.
Sep 16 04:50:54.967794 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 16 04:50:54.967839 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 16 04:50:54.969738 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 16 04:50:54.972830 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 16 04:50:54.975972 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 16 04:50:54.980625 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 16 04:50:54.981974 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 16 04:50:54.985987 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 16 04:50:54.993284 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 16 04:50:54.998949 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 16 04:50:55.001803 jq[1495]: false
Sep 16 04:50:55.001998 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 16 04:50:55.004580 google_oslogin_nss_cache[1497]: oslogin_cache_refresh[1497]: Refreshing passwd entry cache
Sep 16 04:50:55.004598 oslogin_cache_refresh[1497]: Refreshing passwd entry cache
Sep 16 04:50:55.005355 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 16 04:50:55.015694 google_oslogin_nss_cache[1497]: oslogin_cache_refresh[1497]: Failure getting users, quitting
Sep 16 04:50:55.015694 google_oslogin_nss_cache[1497]: oslogin_cache_refresh[1497]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 16 04:50:55.015685 oslogin_cache_refresh[1497]: Failure getting users, quitting
Sep 16 04:50:55.015721 oslogin_cache_refresh[1497]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 16 04:50:55.017270 google_oslogin_nss_cache[1497]: oslogin_cache_refresh[1497]: Refreshing group entry cache
Sep 16 04:50:55.017260 oslogin_cache_refresh[1497]: Refreshing group entry cache
Sep 16 04:50:55.018209 google_oslogin_nss_cache[1497]: oslogin_cache_refresh[1497]: Failure getting groups, quitting
Sep 16 04:50:55.018209 google_oslogin_nss_cache[1497]: oslogin_cache_refresh[1497]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 16 04:50:55.018196 oslogin_cache_refresh[1497]: Failure getting groups, quitting
Sep 16 04:50:55.018207 oslogin_cache_refresh[1497]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 16 04:50:55.019383 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 16 04:50:55.022217 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 16 04:50:55.023032 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 16 04:50:55.024946 systemd[1]: Starting update-engine.service - Update Engine...
Sep 16 04:50:55.032408 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 16 04:50:55.035403 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 16 04:50:55.035780 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 16 04:50:55.036218 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 16 04:50:55.036534 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 16 04:50:55.044488 extend-filesystems[1496]: Found /dev/vda6
Sep 16 04:50:55.044464 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 16 04:50:55.047322 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 16 04:50:55.048652 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 16 04:50:55.056593 extend-filesystems[1496]: Found /dev/vda9
Sep 16 04:50:55.061241 extend-filesystems[1496]: Checking size of /dev/vda9
Sep 16 04:50:55.070813 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 16 04:50:55.103952 extend-filesystems[1496]: Resized partition /dev/vda9
Sep 16 04:50:55.110629 extend-filesystems[1536]: resize2fs 1.47.3 (8-Jul-2025)
Sep 16 04:50:55.119767 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 16 04:50:55.119912 jq[1509]: true
Sep 16 04:50:55.122212 systemd-networkd[1466]: lo: Link UP
Sep 16 04:50:55.122226 systemd-networkd[1466]: lo: Gained carrier
Sep 16 04:50:55.123354 tar[1512]: linux-amd64/helm
Sep 16 04:50:55.129654 update_engine[1508]: I20250916 04:50:55.129567 1508 main.cc:92] Flatcar Update Engine starting
Sep 16 04:50:55.130753 systemd-networkd[1466]: Enumeration completed
Sep 16 04:50:55.130902 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 04:50:55.132600 systemd[1]: Reached target network.target - Network.
Sep 16 04:50:55.144490 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 16 04:50:55.149975 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 16 04:50:55.154581 jq[1538]: true
Sep 16 04:50:55.155164 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 16 04:50:55.157270 systemd[1]: motdgen.service: Deactivated successfully.
Sep 16 04:50:55.157626 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 16 04:50:55.158923 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 16 04:50:55.169498 dbus-daemon[1493]: [system] SELinux support is enabled
Sep 16 04:50:55.171291 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 16 04:50:55.178151 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 16 04:50:55.178192 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 16 04:50:55.183532 update_engine[1508]: I20250916 04:50:55.182950 1508 update_check_scheduler.cc:74] Next update check in 5m18s
Sep 16 04:50:55.181227 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 16 04:50:55.181250 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 16 04:50:55.184211 extend-filesystems[1536]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 16 04:50:55.184211 extend-filesystems[1536]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 16 04:50:55.184211 extend-filesystems[1536]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 16 04:50:55.188464 extend-filesystems[1496]: Resized filesystem in /dev/vda9
Sep 16 04:50:55.190129 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 16 04:50:55.190522 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 16 04:50:55.198681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 16 04:50:55.204952 systemd[1]: Started update-engine.service - Update Engine.
Sep 16 04:50:55.206834 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 16 04:50:55.214444 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 16 04:50:55.218727 kernel: mousedev: PS/2 mouse device common for all mice
Sep 16 04:50:55.224741 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 16 04:50:55.230735 kernel: ACPI: button: Power Button [PWRF]
Sep 16 04:50:55.237962 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 16 04:50:55.249863 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 16 04:50:55.280275 (ntainerd)[1571]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 16 04:50:55.284313 bash[1570]: Updated "/home/core/.ssh/authorized_keys"
Sep 16 04:50:55.286599 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 16 04:50:55.288563 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 16 04:50:55.292868 systemd-logind[1507]: New seat seat0.
Sep 16 04:50:55.293689 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 16 04:50:55.302212 sshd_keygen[1524]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 16 04:50:55.347764 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 16 04:50:55.354265 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 16 04:50:55.380055 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 16 04:50:55.380472 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 16 04:50:55.380656 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 16 04:50:55.456342 systemd[1]: issuegen.service: Deactivated successfully.
Sep 16 04:50:55.458053 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 16 04:50:55.468116 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 16 04:50:55.578138 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 16 04:50:55.586187 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 16 04:50:55.592223 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 16 04:50:55.593581 systemd[1]: Reached target getty.target - Login Prompts.
Sep 16 04:50:55.614494 systemd-logind[1507]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 16 04:50:55.616340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:50:55.634336 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:50:55.634649 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:50:55.637797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:50:55.642306 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:50:55.642319 systemd-networkd[1466]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 04:50:55.651791 systemd-networkd[1466]: eth0: Link UP
Sep 16 04:50:55.660479 systemd-networkd[1466]: eth0: Gained carrier
Sep 16 04:50:55.660510 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:50:55.677400 kernel: kvm_amd: TSC scaling supported
Sep 16 04:50:55.677474 kernel: kvm_amd: Nested Virtualization enabled
Sep 16 04:50:55.677493 kernel: kvm_amd: Nested Paging enabled
Sep 16 04:50:55.677510 kernel: kvm_amd: LBR virtualization supported
Sep 16 04:50:56.025490 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 16 04:50:56.025679 kernel: kvm_amd: Virtual GIF supported
Sep 16 04:50:56.027209 systemd-logind[1507]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 16 04:50:56.030833 locksmithd[1564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 16 04:50:56.035813 systemd-networkd[1466]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 16 04:50:56.038287 systemd-timesyncd[1455]: Network configuration changed, trying to establish connection.
Sep 16 04:50:56.703662 systemd-timesyncd[1455]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 16 04:50:56.703756 systemd-timesyncd[1455]: Initial clock synchronization to Tue 2025-09-16 04:50:56.703306 UTC.
Sep 16 04:50:56.704714 systemd-resolved[1404]: Clock change detected. Flushing caches.
Sep 16 04:50:56.791522 kernel: EDAC MC: Ver: 3.0.0
Sep 16 04:50:56.799672 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:50:56.879828 containerd[1571]: time="2025-09-16T04:50:56Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 16 04:50:56.880779 containerd[1571]: time="2025-09-16T04:50:56.880714166Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 16 04:50:56.895490 containerd[1571]: time="2025-09-16T04:50:56.895325873Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.363µs"
Sep 16 04:50:56.895490 containerd[1571]: time="2025-09-16T04:50:56.895452290Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 16 04:50:56.895490 containerd[1571]: time="2025-09-16T04:50:56.895479671Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 16 04:50:56.895818 containerd[1571]: time="2025-09-16T04:50:56.895782809Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 16 04:50:56.895818 containerd[1571]: time="2025-09-16T04:50:56.895807746Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 16 04:50:56.895905 containerd[1571]: time="2025-09-16T04:50:56.895850266Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 16 04:50:56.895963 containerd[1571]: time="2025-09-16T04:50:56.895937099Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 16 04:50:56.895963 containerd[1571]: time="2025-09-16T04:50:56.895954541Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 16 04:50:56.896328 containerd[1571]: time="2025-09-16T04:50:56.896291142Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 16 04:50:56.896328 containerd[1571]: time="2025-09-16T04:50:56.896310799Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 16 04:50:56.896328 containerd[1571]: time="2025-09-16T04:50:56.896321790Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 16 04:50:56.896396 containerd[1571]: time="2025-09-16T04:50:56.896330366Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 16 04:50:56.896762 containerd[1571]: time="2025-09-16T04:50:56.896568733Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 16 04:50:56.896888 containerd[1571]: time="2025-09-16T04:50:56.896860800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 16 04:50:56.896923 containerd[1571]: time="2025-09-16T04:50:56.896900054Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 16 04:50:56.896923 containerd[1571]: time="2025-09-16T04:50:56.896910454Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 16 04:50:56.896968 containerd[1571]: time="2025-09-16T04:50:56.896952222Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 16 04:50:56.898853 containerd[1571]: time="2025-09-16T04:50:56.898594922Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 16 04:50:56.898853 containerd[1571]: time="2025-09-16T04:50:56.898816888Z" level=info msg="metadata content store policy set" policy=shared
Sep 16 04:50:56.906038 containerd[1571]: time="2025-09-16T04:50:56.905952718Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 16 04:50:56.906173 containerd[1571]: time="2025-09-16T04:50:56.906069266Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 16 04:50:56.906263 containerd[1571]: time="2025-09-16T04:50:56.906232512Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 16 04:50:56.906263 containerd[1571]: time="2025-09-16T04:50:56.906256247Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 16 04:50:56.906327 containerd[1571]: time="2025-09-16T04:50:56.906274100Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 16 04:50:56.906327 containerd[1571]: time="2025-09-16T04:50:56.906288998Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 16 04:50:56.906327 containerd[1571]: time="2025-09-16T04:50:56.906309537Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service
type=io.containerd.service.v1 Sep 16 04:50:56.906469 containerd[1571]: time="2025-09-16T04:50:56.906328502Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 04:50:56.906469 containerd[1571]: time="2025-09-16T04:50:56.906345925Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 04:50:56.906469 containerd[1571]: time="2025-09-16T04:50:56.906360302Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 04:50:56.906469 containerd[1571]: time="2025-09-16T04:50:56.906372635Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 04:50:56.906469 containerd[1571]: time="2025-09-16T04:50:56.906388705Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 04:50:56.906691 containerd[1571]: time="2025-09-16T04:50:56.906643473Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 04:50:56.906691 containerd[1571]: time="2025-09-16T04:50:56.906678929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 04:50:56.906758 containerd[1571]: time="2025-09-16T04:50:56.906700350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 04:50:56.906758 containerd[1571]: time="2025-09-16T04:50:56.906728843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 04:50:56.906758 containerd[1571]: time="2025-09-16T04:50:56.906744753Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 04:50:56.906837 containerd[1571]: time="2025-09-16T04:50:56.906758409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 04:50:56.906837 
containerd[1571]: time="2025-09-16T04:50:56.906774889Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 04:50:56.906837 containerd[1571]: time="2025-09-16T04:50:56.906809294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 04:50:56.906837 containerd[1571]: time="2025-09-16T04:50:56.906827127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 04:50:56.906966 containerd[1571]: time="2025-09-16T04:50:56.906842516Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 04:50:56.906966 containerd[1571]: time="2025-09-16T04:50:56.906857424Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 04:50:56.906966 containerd[1571]: time="2025-09-16T04:50:56.906958894Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 04:50:56.907058 containerd[1571]: time="2025-09-16T04:50:56.906981878Z" level=info msg="Start snapshots syncer" Sep 16 04:50:56.907058 containerd[1571]: time="2025-09-16T04:50:56.907014639Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 04:50:56.907337 containerd[1571]: time="2025-09-16T04:50:56.907287601Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 04:50:56.907524 containerd[1571]: time="2025-09-16T04:50:56.907359095Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 04:50:56.907524 containerd[1571]: time="2025-09-16T04:50:56.907489369Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 04:50:56.907702 containerd[1571]: time="2025-09-16T04:50:56.907651764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 04:50:56.907748 containerd[1571]: time="2025-09-16T04:50:56.907715784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 04:50:56.907748 containerd[1571]: time="2025-09-16T04:50:56.907738176Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 04:50:56.907787 containerd[1571]: time="2025-09-16T04:50:56.907751531Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 04:50:56.907787 containerd[1571]: time="2025-09-16T04:50:56.907766248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 04:50:56.907846 containerd[1571]: time="2025-09-16T04:50:56.907794221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 04:50:56.907846 containerd[1571]: time="2025-09-16T04:50:56.907810501Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 04:50:56.907846 containerd[1571]: time="2025-09-16T04:50:56.907840107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 04:50:56.907902 containerd[1571]: time="2025-09-16T04:50:56.907855566Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 04:50:56.907902 containerd[1571]: time="2025-09-16T04:50:56.907869021Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 04:50:56.907938 containerd[1571]: time="2025-09-16T04:50:56.907915899Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:50:56.907976 containerd[1571]: time="2025-09-16T04:50:56.907933472Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:50:56.907976 containerd[1571]: time="2025-09-16T04:50:56.907945424Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:50:56.907976 containerd[1571]: time="2025-09-16T04:50:56.907957627Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:50:56.907976 containerd[1571]: time="2025-09-16T04:50:56.907966093Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 04:50:56.907976 containerd[1571]: time="2025-09-16T04:50:56.907974940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 04:50:56.908102 containerd[1571]: time="2025-09-16T04:50:56.907988124Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 04:50:56.908102 containerd[1571]: time="2025-09-16T04:50:56.908008202Z" level=info msg="runtime interface created" Sep 16 04:50:56.908102 containerd[1571]: time="2025-09-16T04:50:56.908013241Z" level=info msg="created NRI interface" Sep 16 04:50:56.908102 containerd[1571]: time="2025-09-16T04:50:56.908034461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 04:50:56.908102 containerd[1571]: time="2025-09-16T04:50:56.908048417Z" level=info msg="Connect containerd service" Sep 16 04:50:56.908102 containerd[1571]: time="2025-09-16T04:50:56.908075919Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 04:50:56.909337 
containerd[1571]: time="2025-09-16T04:50:56.909311445Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:50:57.041038 tar[1512]: linux-amd64/LICENSE Sep 16 04:50:57.041038 tar[1512]: linux-amd64/README.md Sep 16 04:50:57.044147 containerd[1571]: time="2025-09-16T04:50:57.043879230Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 16 04:50:57.044147 containerd[1571]: time="2025-09-16T04:50:57.043980370Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 04:50:57.044147 containerd[1571]: time="2025-09-16T04:50:57.044093452Z" level=info msg="Start subscribing containerd event" Sep 16 04:50:57.044288 containerd[1571]: time="2025-09-16T04:50:57.044193329Z" level=info msg="Start recovering state" Sep 16 04:50:57.045556 containerd[1571]: time="2025-09-16T04:50:57.045509346Z" level=info msg="Start event monitor" Sep 16 04:50:57.045556 containerd[1571]: time="2025-09-16T04:50:57.045539002Z" level=info msg="Start cni network conf syncer for default" Sep 16 04:50:57.045556 containerd[1571]: time="2025-09-16T04:50:57.045558799Z" level=info msg="Start streaming server" Sep 16 04:50:57.045664 containerd[1571]: time="2025-09-16T04:50:57.045582504Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 04:50:57.045664 containerd[1571]: time="2025-09-16T04:50:57.045590839Z" level=info msg="runtime interface starting up..." Sep 16 04:50:57.045664 containerd[1571]: time="2025-09-16T04:50:57.045598383Z" level=info msg="starting plugins..." Sep 16 04:50:57.045664 containerd[1571]: time="2025-09-16T04:50:57.045614243Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 04:50:57.046166 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 16 04:50:57.047935 containerd[1571]: time="2025-09-16T04:50:57.047897144Z" level=info msg="containerd successfully booted in 0.167196s" Sep 16 04:50:57.058701 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 04:50:57.935761 systemd-networkd[1466]: eth0: Gained IPv6LL Sep 16 04:50:57.939727 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 16 04:50:57.942222 systemd[1]: Reached target network-online.target - Network is Online. Sep 16 04:50:57.946107 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 16 04:50:57.949596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:50:57.952695 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 16 04:50:57.987731 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 16 04:50:57.989735 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 16 04:50:57.990008 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 16 04:50:57.992662 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 16 04:51:00.125884 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:51:00.127806 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 16 04:51:00.129621 systemd[1]: Startup finished in 3.652s (kernel) + 6.827s (initrd) + 7.034s (userspace) = 17.514s. Sep 16 04:51:00.140894 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:51:00.349705 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 16 04:51:00.351454 systemd[1]: Started sshd@0-10.0.0.58:22-10.0.0.1:58152.service - OpenSSH per-connection server daemon (10.0.0.1:58152). 
Sep 16 04:51:00.450938 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 58152 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:51:00.453108 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:51:00.463029 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 04:51:00.464556 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 16 04:51:00.475420 systemd-logind[1507]: New session 1 of user core. Sep 16 04:51:00.522926 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 04:51:00.526669 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 16 04:51:00.560576 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 04:51:00.563801 systemd-logind[1507]: New session c1 of user core. Sep 16 04:51:00.837809 systemd[1685]: Queued start job for default target default.target. Sep 16 04:51:00.854321 systemd[1685]: Created slice app.slice - User Application Slice. Sep 16 04:51:00.854361 systemd[1685]: Reached target paths.target - Paths. Sep 16 04:51:00.854424 systemd[1685]: Reached target timers.target - Timers. Sep 16 04:51:00.856483 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 04:51:00.873021 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 04:51:00.873181 systemd[1685]: Reached target sockets.target - Sockets. Sep 16 04:51:00.873247 systemd[1685]: Reached target basic.target - Basic System. Sep 16 04:51:00.873291 systemd[1685]: Reached target default.target - Main User Target. Sep 16 04:51:00.873329 systemd[1685]: Startup finished in 299ms. Sep 16 04:51:00.873738 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 04:51:00.876354 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 16 04:51:01.033718 systemd[1]: Started sshd@1-10.0.0.58:22-10.0.0.1:58162.service - OpenSSH per-connection server daemon (10.0.0.1:58162). Sep 16 04:51:01.079306 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 58162 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:51:01.081261 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:51:01.087246 systemd-logind[1507]: New session 2 of user core. Sep 16 04:51:01.092593 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 16 04:51:01.149778 sshd[1700]: Connection closed by 10.0.0.1 port 58162 Sep 16 04:51:01.150362 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Sep 16 04:51:01.163484 kubelet[1669]: E0916 04:51:01.163367 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:51:01.168487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:51:01.168687 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:51:01.169077 systemd[1]: kubelet.service: Consumed 2.949s CPU time, 265.9M memory peak. Sep 16 04:51:01.169557 systemd[1]: sshd@1-10.0.0.58:22-10.0.0.1:58162.service: Deactivated successfully. Sep 16 04:51:01.171671 systemd[1]: session-2.scope: Deactivated successfully. Sep 16 04:51:01.172398 systemd-logind[1507]: Session 2 logged out. Waiting for processes to exit. Sep 16 04:51:01.176754 systemd[1]: Started sshd@2-10.0.0.58:22-10.0.0.1:58164.service - OpenSSH per-connection server daemon (10.0.0.1:58164). Sep 16 04:51:01.178713 systemd-logind[1507]: Removed session 2. 
Sep 16 04:51:01.224131 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 58164 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:51:01.225508 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:51:01.230646 systemd-logind[1507]: New session 3 of user core. Sep 16 04:51:01.245598 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 16 04:51:01.295589 sshd[1710]: Connection closed by 10.0.0.1 port 58164 Sep 16 04:51:01.295910 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Sep 16 04:51:01.316682 systemd[1]: sshd@2-10.0.0.58:22-10.0.0.1:58164.service: Deactivated successfully. Sep 16 04:51:01.318934 systemd[1]: session-3.scope: Deactivated successfully. Sep 16 04:51:01.320525 systemd-logind[1507]: Session 3 logged out. Waiting for processes to exit. Sep 16 04:51:01.324983 systemd[1]: Started sshd@3-10.0.0.58:22-10.0.0.1:58178.service - OpenSSH per-connection server daemon (10.0.0.1:58178). Sep 16 04:51:01.325703 systemd-logind[1507]: Removed session 3. Sep 16 04:51:01.386795 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 58178 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:51:01.388645 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:51:01.395495 systemd-logind[1507]: New session 4 of user core. Sep 16 04:51:01.413624 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 16 04:51:01.469652 sshd[1719]: Connection closed by 10.0.0.1 port 58178 Sep 16 04:51:01.470070 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Sep 16 04:51:01.479897 systemd[1]: sshd@3-10.0.0.58:22-10.0.0.1:58178.service: Deactivated successfully. Sep 16 04:51:01.482152 systemd[1]: session-4.scope: Deactivated successfully. Sep 16 04:51:01.483164 systemd-logind[1507]: Session 4 logged out. Waiting for processes to exit. 
Sep 16 04:51:01.487099 systemd[1]: Started sshd@4-10.0.0.58:22-10.0.0.1:58184.service - OpenSSH per-connection server daemon (10.0.0.1:58184). Sep 16 04:51:01.488015 systemd-logind[1507]: Removed session 4. Sep 16 04:51:01.546182 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 58184 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:51:01.548609 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:51:01.553771 systemd-logind[1507]: New session 5 of user core. Sep 16 04:51:01.563641 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 16 04:51:01.627153 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 04:51:01.627622 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:51:01.650599 sudo[1730]: pam_unix(sudo:session): session closed for user root Sep 16 04:51:01.652677 sshd[1729]: Connection closed by 10.0.0.1 port 58184 Sep 16 04:51:01.653132 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Sep 16 04:51:01.668050 systemd[1]: sshd@4-10.0.0.58:22-10.0.0.1:58184.service: Deactivated successfully. Sep 16 04:51:01.670824 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 04:51:01.671814 systemd-logind[1507]: Session 5 logged out. Waiting for processes to exit. Sep 16 04:51:01.675559 systemd[1]: Started sshd@5-10.0.0.58:22-10.0.0.1:58188.service - OpenSSH per-connection server daemon (10.0.0.1:58188). Sep 16 04:51:01.676558 systemd-logind[1507]: Removed session 5. Sep 16 04:51:01.732563 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 58188 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:51:01.735401 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:51:01.740682 systemd-logind[1507]: New session 6 of user core. 
Sep 16 04:51:01.756764 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 16 04:51:01.815210 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 04:51:01.815558 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:51:02.028611 sudo[1741]: pam_unix(sudo:session): session closed for user root Sep 16 04:51:02.036828 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 04:51:02.037161 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:51:02.049311 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:51:02.103400 augenrules[1763]: No rules Sep 16 04:51:02.105158 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:51:02.105491 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:51:02.106715 sudo[1740]: pam_unix(sudo:session): session closed for user root Sep 16 04:51:02.108288 sshd[1739]: Connection closed by 10.0.0.1 port 58188 Sep 16 04:51:02.108663 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Sep 16 04:51:02.120957 systemd[1]: sshd@5-10.0.0.58:22-10.0.0.1:58188.service: Deactivated successfully. Sep 16 04:51:02.122711 systemd[1]: session-6.scope: Deactivated successfully. Sep 16 04:51:02.123448 systemd-logind[1507]: Session 6 logged out. Waiting for processes to exit. Sep 16 04:51:02.125841 systemd[1]: Started sshd@6-10.0.0.58:22-10.0.0.1:58198.service - OpenSSH per-connection server daemon (10.0.0.1:58198). Sep 16 04:51:02.126610 systemd-logind[1507]: Removed session 6. 
Sep 16 04:51:02.172937 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 58198 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:51:02.174141 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:51:02.178316 systemd-logind[1507]: New session 7 of user core. Sep 16 04:51:02.193580 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 16 04:51:02.246484 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 04:51:02.246789 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:51:02.893029 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 16 04:51:02.937925 (dockerd)[1796]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 04:51:03.833694 dockerd[1796]: time="2025-09-16T04:51:03.833564999Z" level=info msg="Starting up" Sep 16 04:51:03.834615 dockerd[1796]: time="2025-09-16T04:51:03.834574111Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 04:51:03.861618 dockerd[1796]: time="2025-09-16T04:51:03.861530244Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 04:51:04.282867 dockerd[1796]: time="2025-09-16T04:51:04.282685049Z" level=info msg="Loading containers: start." Sep 16 04:51:04.295503 kernel: Initializing XFRM netlink socket Sep 16 04:51:04.598479 systemd-networkd[1466]: docker0: Link UP Sep 16 04:51:04.605083 dockerd[1796]: time="2025-09-16T04:51:04.605008395Z" level=info msg="Loading containers: done." Sep 16 04:51:04.630058 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3929088914-merged.mount: Deactivated successfully. 
Sep 16 04:51:04.632765 dockerd[1796]: time="2025-09-16T04:51:04.632685149Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 04:51:04.632862 dockerd[1796]: time="2025-09-16T04:51:04.632821425Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 04:51:04.632973 dockerd[1796]: time="2025-09-16T04:51:04.632941931Z" level=info msg="Initializing buildkit" Sep 16 04:51:04.669102 dockerd[1796]: time="2025-09-16T04:51:04.669040846Z" level=info msg="Completed buildkit initialization" Sep 16 04:51:04.676923 dockerd[1796]: time="2025-09-16T04:51:04.676843466Z" level=info msg="Daemon has completed initialization" Sep 16 04:51:04.677091 dockerd[1796]: time="2025-09-16T04:51:04.676937312Z" level=info msg="API listen on /run/docker.sock" Sep 16 04:51:04.677220 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 04:51:05.806026 containerd[1571]: time="2025-09-16T04:51:05.805948137Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 16 04:51:06.586950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1648393050.mount: Deactivated successfully. 
Sep 16 04:51:08.076971 containerd[1571]: time="2025-09-16T04:51:08.076847022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:08.078101 containerd[1571]: time="2025-09-16T04:51:08.077619390Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 16 04:51:08.079560 containerd[1571]: time="2025-09-16T04:51:08.079511828Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:08.087088 containerd[1571]: time="2025-09-16T04:51:08.086913446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:08.089145 containerd[1571]: time="2025-09-16T04:51:08.089043049Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.283002338s" Sep 16 04:51:08.089145 containerd[1571]: time="2025-09-16T04:51:08.089131054Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 16 04:51:08.090916 containerd[1571]: time="2025-09-16T04:51:08.090820442Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 16 04:51:09.853541 containerd[1571]: time="2025-09-16T04:51:09.853450282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:09.854536 containerd[1571]: time="2025-09-16T04:51:09.854484041Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 16 04:51:09.856248 containerd[1571]: time="2025-09-16T04:51:09.856182195Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:09.859786 containerd[1571]: time="2025-09-16T04:51:09.859713045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:09.861507 containerd[1571]: time="2025-09-16T04:51:09.861461233Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.770589305s" Sep 16 04:51:09.861507 containerd[1571]: time="2025-09-16T04:51:09.861500677Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 16 04:51:09.862469 containerd[1571]: time="2025-09-16T04:51:09.862249441Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 16 04:51:11.419173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 04:51:11.421377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:51:11.688714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 16 04:51:11.790647 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:51:11.861473 kubelet[2090]: E0916 04:51:11.861381 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:51:11.869669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:51:11.869958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:51:11.870507 systemd[1]: kubelet.service: Consumed 398ms CPU time, 111.1M memory peak. Sep 16 04:51:12.063759 containerd[1571]: time="2025-09-16T04:51:12.063466051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:12.064927 containerd[1571]: time="2025-09-16T04:51:12.064891334Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 16 04:51:12.068498 containerd[1571]: time="2025-09-16T04:51:12.068362562Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:12.072898 containerd[1571]: time="2025-09-16T04:51:12.072815912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:12.074312 containerd[1571]: time="2025-09-16T04:51:12.074216649Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id 
\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 2.211910361s" Sep 16 04:51:12.074312 containerd[1571]: time="2025-09-16T04:51:12.074306657Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 16 04:51:12.074940 containerd[1571]: time="2025-09-16T04:51:12.074880724Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 16 04:51:13.437270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1250887967.mount: Deactivated successfully. Sep 16 04:51:14.251868 containerd[1571]: time="2025-09-16T04:51:14.251780350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:14.252688 containerd[1571]: time="2025-09-16T04:51:14.252647506Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 16 04:51:14.254105 containerd[1571]: time="2025-09-16T04:51:14.254051018Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:14.256949 containerd[1571]: time="2025-09-16T04:51:14.256895421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:14.257583 containerd[1571]: time="2025-09-16T04:51:14.257538927Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo 
tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.182622797s" Sep 16 04:51:14.257583 containerd[1571]: time="2025-09-16T04:51:14.257571679Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 16 04:51:14.258145 containerd[1571]: time="2025-09-16T04:51:14.258100290Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 16 04:51:14.865269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1254474383.mount: Deactivated successfully. Sep 16 04:51:16.145531 containerd[1571]: time="2025-09-16T04:51:16.145412840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:16.146281 containerd[1571]: time="2025-09-16T04:51:16.146251553Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 16 04:51:16.147591 containerd[1571]: time="2025-09-16T04:51:16.147519290Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:16.150496 containerd[1571]: time="2025-09-16T04:51:16.150465134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:16.151918 containerd[1571]: time="2025-09-16T04:51:16.151888492Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.893737707s" Sep 16 04:51:16.151969 containerd[1571]: time="2025-09-16T04:51:16.151922426Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 16 04:51:16.152472 containerd[1571]: time="2025-09-16T04:51:16.152383821Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 04:51:16.624973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2887134893.mount: Deactivated successfully. Sep 16 04:51:16.637329 containerd[1571]: time="2025-09-16T04:51:16.637246078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:51:16.638155 containerd[1571]: time="2025-09-16T04:51:16.638079941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 16 04:51:16.639444 containerd[1571]: time="2025-09-16T04:51:16.639378887Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:51:16.642691 containerd[1571]: time="2025-09-16T04:51:16.642605657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:51:16.643284 containerd[1571]: time="2025-09-16T04:51:16.643220630Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 490.743814ms" Sep 16 04:51:16.643284 containerd[1571]: time="2025-09-16T04:51:16.643256006Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 16 04:51:16.643964 containerd[1571]: time="2025-09-16T04:51:16.643920923Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 16 04:51:17.252670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4230145113.mount: Deactivated successfully. Sep 16 04:51:19.567521 containerd[1571]: time="2025-09-16T04:51:19.567455990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:19.568515 containerd[1571]: time="2025-09-16T04:51:19.568489188Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 16 04:51:19.570115 containerd[1571]: time="2025-09-16T04:51:19.570084960Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:19.573000 containerd[1571]: time="2025-09-16T04:51:19.572948279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:19.573758 containerd[1571]: time="2025-09-16T04:51:19.573730936Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"56909194\" in 2.929778043s" Sep 16 04:51:19.573816 containerd[1571]: time="2025-09-16T04:51:19.573758858Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 16 04:51:22.014226 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 16 04:51:22.016313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:51:22.028306 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 04:51:22.028453 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 16 04:51:22.028838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:51:22.031798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:51:22.060758 systemd[1]: Reload requested from client PID 2248 ('systemctl') (unit session-7.scope)... Sep 16 04:51:22.060783 systemd[1]: Reloading... Sep 16 04:51:22.153476 zram_generator::config[2294]: No configuration found. Sep 16 04:51:22.448707 systemd[1]: Reloading finished in 387 ms. Sep 16 04:51:22.528365 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 04:51:22.528527 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 16 04:51:22.528986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:51:22.529050 systemd[1]: kubelet.service: Consumed 165ms CPU time, 98.4M memory peak. Sep 16 04:51:22.531312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:51:22.744410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 16 04:51:22.749781 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:51:22.791496 kubelet[2339]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:51:22.791496 kubelet[2339]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 16 04:51:22.791496 kubelet[2339]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:51:22.791945 kubelet[2339]: I0916 04:51:22.791568 2339 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:51:22.973775 kubelet[2339]: I0916 04:51:22.973717 2339 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 16 04:51:22.973775 kubelet[2339]: I0916 04:51:22.973757 2339 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:51:22.974072 kubelet[2339]: I0916 04:51:22.974052 2339 server.go:934] "Client rotation is on, will bootstrap in background" Sep 16 04:51:22.996704 kubelet[2339]: E0916 04:51:22.996570 2339 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:51:22.998711 kubelet[2339]: I0916 
04:51:22.998656 2339 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:51:23.006627 kubelet[2339]: I0916 04:51:23.006593 2339 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:51:23.015272 kubelet[2339]: I0916 04:51:23.014163 2339 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 16 04:51:23.015272 kubelet[2339]: I0916 04:51:23.014798 2339 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 16 04:51:23.015272 kubelet[2339]: I0916 04:51:23.014977 2339 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:51:23.015481 kubelet[2339]: I0916 04:51:23.015020 2339 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":
{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:51:23.015593 kubelet[2339]: I0916 04:51:23.015500 2339 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:51:23.015593 kubelet[2339]: I0916 04:51:23.015515 2339 container_manager_linux.go:300] "Creating device plugin manager" Sep 16 04:51:23.015688 kubelet[2339]: I0916 04:51:23.015666 2339 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:51:23.019402 kubelet[2339]: I0916 04:51:23.019329 2339 kubelet.go:408] "Attempting to sync node with API server" Sep 16 04:51:23.019402 kubelet[2339]: I0916 04:51:23.019368 2339 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:51:23.019402 kubelet[2339]: I0916 04:51:23.019414 2339 kubelet.go:314] "Adding apiserver pod source" Sep 16 04:51:23.019402 kubelet[2339]: I0916 04:51:23.019460 2339 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:51:23.021728 kubelet[2339]: W0916 04:51:23.021662 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 16 04:51:23.022004 kubelet[2339]: W0916 04:51:23.021918 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused 
Sep 16 04:51:23.022004 kubelet[2339]: E0916 04:51:23.021962 2339 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:51:23.022004 kubelet[2339]: E0916 04:51:23.021965 2339 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:51:23.023157 kubelet[2339]: I0916 04:51:23.023116 2339 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:51:23.023605 kubelet[2339]: I0916 04:51:23.023585 2339 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:51:23.024135 kubelet[2339]: W0916 04:51:23.024097 2339 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 16 04:51:23.025855 kubelet[2339]: I0916 04:51:23.025828 2339 server.go:1274] "Started kubelet" Sep 16 04:51:23.026538 kubelet[2339]: I0916 04:51:23.026503 2339 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:51:23.027366 kubelet[2339]: I0916 04:51:23.026150 2339 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:51:23.027768 kubelet[2339]: I0916 04:51:23.027737 2339 server.go:449] "Adding debug handlers to kubelet server" Sep 16 04:51:23.027812 kubelet[2339]: I0916 04:51:23.027802 2339 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:51:23.028378 kubelet[2339]: I0916 04:51:23.028359 2339 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:51:23.031468 kubelet[2339]: I0916 04:51:23.031070 2339 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:51:23.032163 kubelet[2339]: I0916 04:51:23.032143 2339 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 16 04:51:23.032265 kubelet[2339]: I0916 04:51:23.032246 2339 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 16 04:51:23.032327 kubelet[2339]: I0916 04:51:23.032311 2339 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:51:23.033447 kubelet[2339]: W0916 04:51:23.032707 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 16 04:51:23.033447 kubelet[2339]: E0916 04:51:23.032759 2339 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:51:23.033447 kubelet[2339]: E0916 04:51:23.032913 2339 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:51:23.033447 kubelet[2339]: E0916 04:51:23.032978 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="200ms" Sep 16 04:51:23.033447 kubelet[2339]: E0916 04:51:23.033099 2339 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:51:23.033891 kubelet[2339]: I0916 04:51:23.033864 2339 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:51:23.033891 kubelet[2339]: E0916 04:51:23.032813 2339 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.58:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.58:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1865aa1fc77fe34a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-16 04:51:23.025802058 +0000 UTC m=+0.272152898,LastTimestamp:2025-09-16 04:51:23.025802058 +0000 UTC m=+0.272152898,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 16 04:51:23.034005 kubelet[2339]: I0916 04:51:23.033976 2339 factory.go:219] Registration of the crio 
container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:51:23.035659 kubelet[2339]: I0916 04:51:23.035637 2339 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:51:23.053676 kubelet[2339]: I0916 04:51:23.053640 2339 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 16 04:51:23.053676 kubelet[2339]: I0916 04:51:23.053661 2339 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 16 04:51:23.053676 kubelet[2339]: I0916 04:51:23.053676 2339 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:51:23.054590 kubelet[2339]: I0916 04:51:23.054560 2339 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:51:23.056285 kubelet[2339]: I0916 04:51:23.056258 2339 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 04:51:23.056285 kubelet[2339]: I0916 04:51:23.056288 2339 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 16 04:51:23.056373 kubelet[2339]: I0916 04:51:23.056311 2339 kubelet.go:2321] "Starting kubelet main sync loop" Sep 16 04:51:23.056373 kubelet[2339]: E0916 04:51:23.056356 2339 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:51:23.057132 kubelet[2339]: W0916 04:51:23.056976 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 16 04:51:23.057132 kubelet[2339]: E0916 04:51:23.057055 2339 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:51:23.134072 kubelet[2339]: E0916 04:51:23.134002 2339 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:51:23.157302 kubelet[2339]: E0916 04:51:23.157248 2339 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 16 04:51:23.234261 kubelet[2339]: E0916 04:51:23.234220 2339 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:51:23.234413 kubelet[2339]: E0916 04:51:23.234267 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="400ms" Sep 16 04:51:23.335378 kubelet[2339]: E0916 04:51:23.335323 2339 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:51:23.357562 kubelet[2339]: E0916 04:51:23.357528 2339 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 16 04:51:23.436085 kubelet[2339]: E0916 04:51:23.436062 2339 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:51:23.536334 kubelet[2339]: E0916 04:51:23.536265 2339 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:51:23.590244 kubelet[2339]: I0916 04:51:23.590096 2339 policy_none.go:49] "None policy: Start" Sep 16 04:51:23.591006 kubelet[2339]: I0916 04:51:23.590989 2339 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 16 04:51:23.591058 kubelet[2339]: I0916 04:51:23.591014 
2339 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:51:23.601083 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 04:51:23.620399 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 04:51:23.624254 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 04:51:23.635666 kubelet[2339]: E0916 04:51:23.635626 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="800ms" Sep 16 04:51:23.636860 kubelet[2339]: E0916 04:51:23.636829 2339 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:51:23.639510 kubelet[2339]: I0916 04:51:23.639485 2339 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:51:23.639787 kubelet[2339]: I0916 04:51:23.639747 2339 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:51:23.639787 kubelet[2339]: I0916 04:51:23.639762 2339 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:51:23.640042 kubelet[2339]: I0916 04:51:23.639985 2339 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:51:23.641334 kubelet[2339]: E0916 04:51:23.641309 2339 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 16 04:51:23.742102 kubelet[2339]: I0916 04:51:23.742043 2339 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:51:23.742525 kubelet[2339]: E0916 04:51:23.742477 2339 kubelet_node_status.go:95] "Unable to register node with API server" 
err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Sep 16 04:51:23.768197 systemd[1]: Created slice kubepods-burstable-pod0c5d0807df69b684f4a4aace9f833ba7.slice - libcontainer container kubepods-burstable-pod0c5d0807df69b684f4a4aace9f833ba7.slice. Sep 16 04:51:23.796670 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 16 04:51:23.815010 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 16 04:51:23.836630 kubelet[2339]: I0916 04:51:23.836556 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c5d0807df69b684f4a4aace9f833ba7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c5d0807df69b684f4a4aace9f833ba7\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:51:23.837097 kubelet[2339]: I0916 04:51:23.836615 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c5d0807df69b684f4a4aace9f833ba7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0c5d0807df69b684f4a4aace9f833ba7\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:51:23.837097 kubelet[2339]: I0916 04:51:23.836677 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:51:23.837097 kubelet[2339]: I0916 04:51:23.836699 2339 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:51:23.837097 kubelet[2339]: I0916 04:51:23.836733 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:51:23.837097 kubelet[2339]: I0916 04:51:23.836753 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c5d0807df69b684f4a4aace9f833ba7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c5d0807df69b684f4a4aace9f833ba7\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:51:23.837248 kubelet[2339]: I0916 04:51:23.836853 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:51:23.837248 kubelet[2339]: I0916 04:51:23.836911 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:51:23.837248 kubelet[2339]: I0916 
04:51:23.836945 2339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 16 04:51:23.944411 kubelet[2339]: I0916 04:51:23.944256 2339 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:51:23.944684 kubelet[2339]: E0916 04:51:23.944645 2339 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Sep 16 04:51:23.962475 kubelet[2339]: W0916 04:51:23.962326 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 16 04:51:23.962475 kubelet[2339]: E0916 04:51:23.962426 2339 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:51:24.067925 kubelet[2339]: W0916 04:51:24.067774 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 16 04:51:24.067925 kubelet[2339]: E0916 04:51:24.067907 2339 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:51:24.094531 kubelet[2339]: E0916 04:51:24.094461 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:24.095355 containerd[1571]: time="2025-09-16T04:51:24.095296931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0c5d0807df69b684f4a4aace9f833ba7,Namespace:kube-system,Attempt:0,}" Sep 16 04:51:24.112640 kubelet[2339]: E0916 04:51:24.112578 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:24.113197 containerd[1571]: time="2025-09-16T04:51:24.113135639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 16 04:51:24.118425 kubelet[2339]: E0916 04:51:24.118386 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:24.118817 containerd[1571]: time="2025-09-16T04:51:24.118772738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 16 04:51:24.234538 kubelet[2339]: W0916 04:51:24.234362 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 16 04:51:24.234538 kubelet[2339]: E0916 04:51:24.234413 2339 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:51:24.309673 kubelet[2339]: W0916 04:51:24.309579 2339 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 16 04:51:24.309673 kubelet[2339]: E0916 04:51:24.309664 2339 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:51:24.346974 kubelet[2339]: I0916 04:51:24.346935 2339 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:51:24.347314 kubelet[2339]: E0916 04:51:24.347287 2339 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Sep 16 04:51:24.436209 kubelet[2339]: E0916 04:51:24.436154 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="1.6s" Sep 16 04:51:24.542006 containerd[1571]: time="2025-09-16T04:51:24.541595100Z" level=info msg="connecting to shim 58bb60747ef976405fd949aad9b0b69a6d00ebd377c87daeab8c70545a876a19" 
address="unix:///run/containerd/s/a9662f87a061dbb180284b6c7e1534af91808efb6c48fc37c815c9d037dc373d" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:51:24.545725 containerd[1571]: time="2025-09-16T04:51:24.545660212Z" level=info msg="connecting to shim 57388fc70639d80e8b9a2c771111d591111299c26486af27664fc2c52b291976" address="unix:///run/containerd/s/d75533c92718d1886f9f0b16f6f6d1a23710708e763b82779fddc7220d0c801a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:51:24.546170 containerd[1571]: time="2025-09-16T04:51:24.546098033Z" level=info msg="connecting to shim e25cf7dfca7748a1649512ca9f235b88818535ab39294b294d1cccaf5409db28" address="unix:///run/containerd/s/0681cac0adbe9530448219fa022f64de0b028ff6b4330493214fd15ef1b4f254" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:51:24.573685 systemd[1]: Started cri-containerd-58bb60747ef976405fd949aad9b0b69a6d00ebd377c87daeab8c70545a876a19.scope - libcontainer container 58bb60747ef976405fd949aad9b0b69a6d00ebd377c87daeab8c70545a876a19. Sep 16 04:51:24.578687 systemd[1]: Started cri-containerd-57388fc70639d80e8b9a2c771111d591111299c26486af27664fc2c52b291976.scope - libcontainer container 57388fc70639d80e8b9a2c771111d591111299c26486af27664fc2c52b291976. Sep 16 04:51:24.586095 systemd[1]: Started cri-containerd-e25cf7dfca7748a1649512ca9f235b88818535ab39294b294d1cccaf5409db28.scope - libcontainer container e25cf7dfca7748a1649512ca9f235b88818535ab39294b294d1cccaf5409db28. 
Sep 16 04:51:24.640189 containerd[1571]: time="2025-09-16T04:51:24.638844723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"58bb60747ef976405fd949aad9b0b69a6d00ebd377c87daeab8c70545a876a19\"" Sep 16 04:51:24.641702 kubelet[2339]: E0916 04:51:24.641674 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:24.642962 containerd[1571]: time="2025-09-16T04:51:24.642910386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0c5d0807df69b684f4a4aace9f833ba7,Namespace:kube-system,Attempt:0,} returns sandbox id \"57388fc70639d80e8b9a2c771111d591111299c26486af27664fc2c52b291976\"" Sep 16 04:51:24.645264 containerd[1571]: time="2025-09-16T04:51:24.645217252Z" level=info msg="CreateContainer within sandbox \"58bb60747ef976405fd949aad9b0b69a6d00ebd377c87daeab8c70545a876a19\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 04:51:24.646666 kubelet[2339]: E0916 04:51:24.646642 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:24.649583 containerd[1571]: time="2025-09-16T04:51:24.649543844Z" level=info msg="CreateContainer within sandbox \"57388fc70639d80e8b9a2c771111d591111299c26486af27664fc2c52b291976\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 04:51:24.658379 containerd[1571]: time="2025-09-16T04:51:24.658315922Z" level=info msg="Container 77fc69f9bf29072c26405216c3adcefac7f881ec5d9455bee851d37f3ef52328: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:51:24.660126 containerd[1571]: time="2025-09-16T04:51:24.660072285Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e25cf7dfca7748a1649512ca9f235b88818535ab39294b294d1cccaf5409db28\"" Sep 16 04:51:24.660764 kubelet[2339]: E0916 04:51:24.660719 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:24.662314 containerd[1571]: time="2025-09-16T04:51:24.662283962Z" level=info msg="CreateContainer within sandbox \"e25cf7dfca7748a1649512ca9f235b88818535ab39294b294d1cccaf5409db28\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 04:51:24.664736 containerd[1571]: time="2025-09-16T04:51:24.664678521Z" level=info msg="Container 79efcb544ea68f8632151765f5f54ed1dcf4d9403d5e030804654de107d15b1f: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:51:24.674517 containerd[1571]: time="2025-09-16T04:51:24.674444903Z" level=info msg="CreateContainer within sandbox \"57388fc70639d80e8b9a2c771111d591111299c26486af27664fc2c52b291976\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"79efcb544ea68f8632151765f5f54ed1dcf4d9403d5e030804654de107d15b1f\"" Sep 16 04:51:24.675448 containerd[1571]: time="2025-09-16T04:51:24.675416816Z" level=info msg="StartContainer for \"79efcb544ea68f8632151765f5f54ed1dcf4d9403d5e030804654de107d15b1f\"" Sep 16 04:51:24.676245 containerd[1571]: time="2025-09-16T04:51:24.676201757Z" level=info msg="CreateContainer within sandbox \"58bb60747ef976405fd949aad9b0b69a6d00ebd377c87daeab8c70545a876a19\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"77fc69f9bf29072c26405216c3adcefac7f881ec5d9455bee851d37f3ef52328\"" Sep 16 04:51:24.676463 containerd[1571]: time="2025-09-16T04:51:24.676416540Z" level=info msg="connecting to shim 79efcb544ea68f8632151765f5f54ed1dcf4d9403d5e030804654de107d15b1f" 
address="unix:///run/containerd/s/d75533c92718d1886f9f0b16f6f6d1a23710708e763b82779fddc7220d0c801a" protocol=ttrpc version=3 Sep 16 04:51:24.676632 containerd[1571]: time="2025-09-16T04:51:24.676487603Z" level=info msg="StartContainer for \"77fc69f9bf29072c26405216c3adcefac7f881ec5d9455bee851d37f3ef52328\"" Sep 16 04:51:24.680457 containerd[1571]: time="2025-09-16T04:51:24.680270877Z" level=info msg="Container e9380534b39dca6bdfa7bd81f691679569ee36b765d2b40d93fa82c92d5ef2b7: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:51:24.680805 containerd[1571]: time="2025-09-16T04:51:24.680779541Z" level=info msg="connecting to shim 77fc69f9bf29072c26405216c3adcefac7f881ec5d9455bee851d37f3ef52328" address="unix:///run/containerd/s/a9662f87a061dbb180284b6c7e1534af91808efb6c48fc37c815c9d037dc373d" protocol=ttrpc version=3 Sep 16 04:51:24.691125 containerd[1571]: time="2025-09-16T04:51:24.691047603Z" level=info msg="CreateContainer within sandbox \"e25cf7dfca7748a1649512ca9f235b88818535ab39294b294d1cccaf5409db28\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e9380534b39dca6bdfa7bd81f691679569ee36b765d2b40d93fa82c92d5ef2b7\"" Sep 16 04:51:24.692472 containerd[1571]: time="2025-09-16T04:51:24.692400981Z" level=info msg="StartContainer for \"e9380534b39dca6bdfa7bd81f691679569ee36b765d2b40d93fa82c92d5ef2b7\"" Sep 16 04:51:24.693741 containerd[1571]: time="2025-09-16T04:51:24.693711328Z" level=info msg="connecting to shim e9380534b39dca6bdfa7bd81f691679569ee36b765d2b40d93fa82c92d5ef2b7" address="unix:///run/containerd/s/0681cac0adbe9530448219fa022f64de0b028ff6b4330493214fd15ef1b4f254" protocol=ttrpc version=3 Sep 16 04:51:24.701633 systemd[1]: Started cri-containerd-79efcb544ea68f8632151765f5f54ed1dcf4d9403d5e030804654de107d15b1f.scope - libcontainer container 79efcb544ea68f8632151765f5f54ed1dcf4d9403d5e030804654de107d15b1f. 
Sep 16 04:51:24.711661 systemd[1]: Started cri-containerd-77fc69f9bf29072c26405216c3adcefac7f881ec5d9455bee851d37f3ef52328.scope - libcontainer container 77fc69f9bf29072c26405216c3adcefac7f881ec5d9455bee851d37f3ef52328. Sep 16 04:51:24.735709 systemd[1]: Started cri-containerd-e9380534b39dca6bdfa7bd81f691679569ee36b765d2b40d93fa82c92d5ef2b7.scope - libcontainer container e9380534b39dca6bdfa7bd81f691679569ee36b765d2b40d93fa82c92d5ef2b7. Sep 16 04:51:24.787587 containerd[1571]: time="2025-09-16T04:51:24.787497657Z" level=info msg="StartContainer for \"77fc69f9bf29072c26405216c3adcefac7f881ec5d9455bee851d37f3ef52328\" returns successfully" Sep 16 04:51:24.796564 containerd[1571]: time="2025-09-16T04:51:24.795325284Z" level=info msg="StartContainer for \"79efcb544ea68f8632151765f5f54ed1dcf4d9403d5e030804654de107d15b1f\" returns successfully" Sep 16 04:51:24.825542 containerd[1571]: time="2025-09-16T04:51:24.825465126Z" level=info msg="StartContainer for \"e9380534b39dca6bdfa7bd81f691679569ee36b765d2b40d93fa82c92d5ef2b7\" returns successfully" Sep 16 04:51:25.089777 kubelet[2339]: E0916 04:51:25.072519 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:25.089777 kubelet[2339]: E0916 04:51:25.085946 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:25.089777 kubelet[2339]: E0916 04:51:25.088359 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:25.151475 kubelet[2339]: I0916 04:51:25.149728 2339 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:51:26.083002 kubelet[2339]: E0916 04:51:26.082942 2339 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:26.218485 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1472461687 wd_nsec: 1472461445 Sep 16 04:51:27.084661 kubelet[2339]: E0916 04:51:27.084582 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:27.991465 kubelet[2339]: E0916 04:51:27.991391 2339 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 16 04:51:28.022318 kubelet[2339]: I0916 04:51:28.022268 2339 apiserver.go:52] "Watching apiserver" Sep 16 04:51:28.032733 kubelet[2339]: I0916 04:51:28.032668 2339 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 16 04:51:28.121839 kubelet[2339]: I0916 04:51:28.121737 2339 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 16 04:51:28.958970 kubelet[2339]: E0916 04:51:28.958921 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:29.087104 kubelet[2339]: E0916 04:51:29.087043 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:30.265885 kubelet[2339]: E0916 04:51:30.265828 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:30.711867 systemd[1]: Reload requested from client PID 2619 ('systemctl') (unit session-7.scope)... Sep 16 04:51:30.711884 systemd[1]: Reloading... 
Sep 16 04:51:30.898472 zram_generator::config[2662]: No configuration found. Sep 16 04:51:31.089706 kubelet[2339]: E0916 04:51:31.089663 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:31.164636 systemd[1]: Reloading finished in 452 ms. Sep 16 04:51:31.201244 kubelet[2339]: I0916 04:51:31.201156 2339 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:51:31.201239 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:51:31.219053 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 04:51:31.219412 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:51:31.219497 systemd[1]: kubelet.service: Consumed 832ms CPU time, 131.5M memory peak. Sep 16 04:51:31.222662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:51:31.444770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:51:31.458896 (kubelet)[2707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:51:31.515747 kubelet[2707]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:51:31.515747 kubelet[2707]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 16 04:51:31.515747 kubelet[2707]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:51:31.515747 kubelet[2707]: I0916 04:51:31.515401 2707 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:51:31.522571 kubelet[2707]: I0916 04:51:31.522539 2707 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 16 04:51:31.522571 kubelet[2707]: I0916 04:51:31.522562 2707 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:51:31.522832 kubelet[2707]: I0916 04:51:31.522809 2707 server.go:934] "Client rotation is on, will bootstrap in background" Sep 16 04:51:31.524091 kubelet[2707]: I0916 04:51:31.524061 2707 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 16 04:51:31.525821 kubelet[2707]: I0916 04:51:31.525781 2707 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:51:31.534346 kubelet[2707]: I0916 04:51:31.534298 2707 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:51:31.539046 kubelet[2707]: I0916 04:51:31.539019 2707 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:51:31.539178 kubelet[2707]: I0916 04:51:31.539161 2707 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 16 04:51:31.539332 kubelet[2707]: I0916 04:51:31.539299 2707 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:51:31.539505 kubelet[2707]: I0916 04:51:31.539322 2707 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 16 04:51:31.539625 kubelet[2707]: I0916 04:51:31.539508 2707 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:51:31.539625 kubelet[2707]: I0916 04:51:31.539516 2707 container_manager_linux.go:300] "Creating device plugin manager" Sep 16 04:51:31.539625 kubelet[2707]: I0916 04:51:31.539544 2707 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:51:31.539697 kubelet[2707]: I0916 04:51:31.539646 2707 kubelet.go:408] "Attempting to sync node with API server" Sep 16 04:51:31.539697 kubelet[2707]: I0916 04:51:31.539658 2707 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:51:31.539697 kubelet[2707]: I0916 04:51:31.539689 2707 kubelet.go:314] "Adding apiserver pod source" Sep 16 04:51:31.539762 kubelet[2707]: I0916 04:51:31.539700 2707 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:51:31.544220 kubelet[2707]: I0916 04:51:31.543614 2707 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:51:31.544220 kubelet[2707]: I0916 04:51:31.544017 2707 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:51:31.545295 kubelet[2707]: I0916 04:51:31.545279 2707 server.go:1274] "Started kubelet" Sep 16 04:51:31.545507 kubelet[2707]: I0916 04:51:31.545474 2707 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:51:31.545990 kubelet[2707]: I0916 04:51:31.545961 2707 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:51:31.546316 kubelet[2707]: I0916 04:51:31.546298 2707 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:51:31.547308 kubelet[2707]: I0916 04:51:31.547279 2707 server.go:449] "Adding debug handlers to kubelet server" Sep 16 04:51:31.549134 
kubelet[2707]: I0916 04:51:31.549117 2707 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:51:31.549720 kubelet[2707]: E0916 04:51:31.549684 2707 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:51:31.549782 kubelet[2707]: I0916 04:51:31.549769 2707 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:51:31.550701 kubelet[2707]: I0916 04:51:31.550685 2707 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 16 04:51:31.551896 kubelet[2707]: I0916 04:51:31.551880 2707 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 16 04:51:31.552091 kubelet[2707]: I0916 04:51:31.552079 2707 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:51:31.661239 kubelet[2707]: I0916 04:51:31.661202 2707 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:51:31.661510 kubelet[2707]: I0916 04:51:31.661497 2707 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:51:31.661808 kubelet[2707]: I0916 04:51:31.661775 2707 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:51:31.667648 kubelet[2707]: I0916 04:51:31.667274 2707 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:51:31.668842 kubelet[2707]: I0916 04:51:31.668806 2707 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 16 04:51:31.668842 kubelet[2707]: I0916 04:51:31.668837 2707 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 16 04:51:31.668947 kubelet[2707]: I0916 04:51:31.668860 2707 kubelet.go:2321] "Starting kubelet main sync loop" Sep 16 04:51:31.668947 kubelet[2707]: E0916 04:51:31.668908 2707 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:51:31.713550 kubelet[2707]: I0916 04:51:31.713219 2707 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 16 04:51:31.713550 kubelet[2707]: I0916 04:51:31.713238 2707 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 16 04:51:31.713550 kubelet[2707]: I0916 04:51:31.713257 2707 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:51:31.713550 kubelet[2707]: I0916 04:51:31.713415 2707 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 04:51:31.713550 kubelet[2707]: I0916 04:51:31.713428 2707 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 04:51:31.713550 kubelet[2707]: I0916 04:51:31.713495 2707 policy_none.go:49] "None policy: Start" Sep 16 04:51:31.714451 kubelet[2707]: I0916 04:51:31.714398 2707 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 16 04:51:31.715163 kubelet[2707]: I0916 04:51:31.714942 2707 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:51:31.715310 kubelet[2707]: I0916 04:51:31.715172 2707 state_mem.go:75] "Updated machine memory state" Sep 16 04:51:31.723169 kubelet[2707]: I0916 04:51:31.723135 2707 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:51:31.723545 kubelet[2707]: I0916 04:51:31.723522 2707 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:51:31.723718 kubelet[2707]: I0916 04:51:31.723668 2707 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:51:31.724359 kubelet[2707]: I0916 04:51:31.724328 2707 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:51:31.776857 kubelet[2707]: E0916 04:51:31.776781 2707 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 16 04:51:31.777134 kubelet[2707]: E0916 04:51:31.776802 2707 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 16 04:51:31.838601 kubelet[2707]: I0916 04:51:31.838550 2707 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:51:31.850314 kubelet[2707]: I0916 04:51:31.849413 2707 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 16 04:51:31.850314 kubelet[2707]: I0916 04:51:31.849582 2707 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 16 04:51:31.861359 kubelet[2707]: I0916 04:51:31.860630 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 16 04:51:31.861359 kubelet[2707]: I0916 04:51:31.860742 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c5d0807df69b684f4a4aace9f833ba7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0c5d0807df69b684f4a4aace9f833ba7\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:51:31.861359 kubelet[2707]: I0916 04:51:31.860773 2707 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:51:31.861359 kubelet[2707]: I0916 04:51:31.860805 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:51:31.861359 kubelet[2707]: I0916 04:51:31.860822 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:51:31.861661 kubelet[2707]: I0916 04:51:31.860844 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:51:31.861661 kubelet[2707]: I0916 04:51:31.860863 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c5d0807df69b684f4a4aace9f833ba7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c5d0807df69b684f4a4aace9f833ba7\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:51:31.861661 kubelet[2707]: I0916 04:51:31.860899 2707 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c5d0807df69b684f4a4aace9f833ba7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c5d0807df69b684f4a4aace9f833ba7\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:51:31.861661 kubelet[2707]: I0916 04:51:31.860911 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:51:32.075403 kubelet[2707]: E0916 04:51:32.075355 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:32.077459 kubelet[2707]: E0916 04:51:32.077393 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:32.077640 kubelet[2707]: E0916 04:51:32.077532 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:32.540068 kubelet[2707]: I0916 04:51:32.539973 2707 apiserver.go:52] "Watching apiserver" Sep 16 04:51:32.553245 kubelet[2707]: I0916 04:51:32.553185 2707 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 16 04:51:32.666892 kubelet[2707]: I0916 04:51:32.666749 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.6667248199999998 podStartE2EDuration="2.66672482s" podCreationTimestamp="2025-09-16 04:51:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:51:32.666676708 +0000 UTC m=+1.199952074" watchObservedRunningTime="2025-09-16 04:51:32.66672482 +0000 UTC m=+1.200000176" Sep 16 04:51:32.675733 kubelet[2707]: I0916 04:51:32.675673 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.675652646 podStartE2EDuration="4.675652646s" podCreationTimestamp="2025-09-16 04:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:51:32.675554339 +0000 UTC m=+1.208829695" watchObservedRunningTime="2025-09-16 04:51:32.675652646 +0000 UTC m=+1.208928002" Sep 16 04:51:32.693721 kubelet[2707]: E0916 04:51:32.692946 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:32.698911 kubelet[2707]: E0916 04:51:32.698721 2707 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 16 04:51:32.699214 kubelet[2707]: E0916 04:51:32.699158 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:32.700046 kubelet[2707]: E0916 04:51:32.700024 2707 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 16 04:51:32.700172 kubelet[2707]: E0916 04:51:32.700156 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 
04:51:32.702375 kubelet[2707]: I0916 04:51:32.702225 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7022110750000001 podStartE2EDuration="1.702211075s" podCreationTimestamp="2025-09-16 04:51:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:51:32.689990097 +0000 UTC m=+1.223265463" watchObservedRunningTime="2025-09-16 04:51:32.702211075 +0000 UTC m=+1.235486431" Sep 16 04:51:33.695131 kubelet[2707]: E0916 04:51:33.695064 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:33.695131 kubelet[2707]: E0916 04:51:33.695064 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:33.695131 kubelet[2707]: E0916 04:51:33.695114 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:34.696095 kubelet[2707]: E0916 04:51:34.696046 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:35.047705 kubelet[2707]: E0916 04:51:35.047651 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:35.565324 kubelet[2707]: I0916 04:51:35.565286 2707 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 04:51:35.565781 containerd[1571]: time="2025-09-16T04:51:35.565726383Z" 
level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 16 04:51:35.566298 kubelet[2707]: I0916 04:51:35.566028 2707 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 04:51:35.700364 kubelet[2707]: E0916 04:51:35.700243 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:35.701890 kubelet[2707]: E0916 04:51:35.700510 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:36.243704 systemd[1]: Created slice kubepods-besteffort-pod3684fdf5_13d6_429b_a327_b97de4003e3b.slice - libcontainer container kubepods-besteffort-pod3684fdf5_13d6_429b_a327_b97de4003e3b.slice. Sep 16 04:51:36.290458 kubelet[2707]: I0916 04:51:36.290382 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3684fdf5-13d6-429b-a327-b97de4003e3b-kube-proxy\") pod \"kube-proxy-266sh\" (UID: \"3684fdf5-13d6-429b-a327-b97de4003e3b\") " pod="kube-system/kube-proxy-266sh" Sep 16 04:51:36.290645 kubelet[2707]: I0916 04:51:36.290496 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3684fdf5-13d6-429b-a327-b97de4003e3b-xtables-lock\") pod \"kube-proxy-266sh\" (UID: \"3684fdf5-13d6-429b-a327-b97de4003e3b\") " pod="kube-system/kube-proxy-266sh" Sep 16 04:51:36.290645 kubelet[2707]: I0916 04:51:36.290526 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3684fdf5-13d6-429b-a327-b97de4003e3b-lib-modules\") pod \"kube-proxy-266sh\" (UID: 
\"3684fdf5-13d6-429b-a327-b97de4003e3b\") " pod="kube-system/kube-proxy-266sh" Sep 16 04:51:36.290645 kubelet[2707]: I0916 04:51:36.290545 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6bfl\" (UniqueName: \"kubernetes.io/projected/3684fdf5-13d6-429b-a327-b97de4003e3b-kube-api-access-f6bfl\") pod \"kube-proxy-266sh\" (UID: \"3684fdf5-13d6-429b-a327-b97de4003e3b\") " pod="kube-system/kube-proxy-266sh" Sep 16 04:51:36.552863 kubelet[2707]: E0916 04:51:36.552775 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:36.553922 containerd[1571]: time="2025-09-16T04:51:36.553522042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-266sh,Uid:3684fdf5-13d6-429b-a327-b97de4003e3b,Namespace:kube-system,Attempt:0,}" Sep 16 04:51:36.649551 systemd[1]: Created slice kubepods-besteffort-pod92e9499d_f840_47f6_bceb_12c70fd5d26f.slice - libcontainer container kubepods-besteffort-pod92e9499d_f840_47f6_bceb_12c70fd5d26f.slice. 
Sep 16 04:51:36.665751 containerd[1571]: time="2025-09-16T04:51:36.665680519Z" level=info msg="connecting to shim 1d234bb6d9a453df52e86767e29ebb80a1c2552e5244d962f54fee56f1faae54" address="unix:///run/containerd/s/32645d3972317ca208c075490552f085b6a5d20d0257c59354ef0e965ce697c1" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:51:36.693063 kubelet[2707]: I0916 04:51:36.692964 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/92e9499d-f840-47f6-bceb-12c70fd5d26f-var-lib-calico\") pod \"tigera-operator-58fc44c59b-h62w4\" (UID: \"92e9499d-f840-47f6-bceb-12c70fd5d26f\") " pod="tigera-operator/tigera-operator-58fc44c59b-h62w4" Sep 16 04:51:36.693063 kubelet[2707]: I0916 04:51:36.693004 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhnd8\" (UniqueName: \"kubernetes.io/projected/92e9499d-f840-47f6-bceb-12c70fd5d26f-kube-api-access-xhnd8\") pod \"tigera-operator-58fc44c59b-h62w4\" (UID: \"92e9499d-f840-47f6-bceb-12c70fd5d26f\") " pod="tigera-operator/tigera-operator-58fc44c59b-h62w4" Sep 16 04:51:36.695612 systemd[1]: Started cri-containerd-1d234bb6d9a453df52e86767e29ebb80a1c2552e5244d962f54fee56f1faae54.scope - libcontainer container 1d234bb6d9a453df52e86767e29ebb80a1c2552e5244d962f54fee56f1faae54. 
Sep 16 04:51:36.737653 containerd[1571]: time="2025-09-16T04:51:36.737584346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-266sh,Uid:3684fdf5-13d6-429b-a327-b97de4003e3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d234bb6d9a453df52e86767e29ebb80a1c2552e5244d962f54fee56f1faae54\"" Sep 16 04:51:36.738699 kubelet[2707]: E0916 04:51:36.738670 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:36.741088 containerd[1571]: time="2025-09-16T04:51:36.741040371Z" level=info msg="CreateContainer within sandbox \"1d234bb6d9a453df52e86767e29ebb80a1c2552e5244d962f54fee56f1faae54\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 16 04:51:36.753683 containerd[1571]: time="2025-09-16T04:51:36.753621655Z" level=info msg="Container 48e0ffd37df89ddce11d04a6a473d41bcd43bd93e6a3830bf2a021fd9e5c3e20: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:51:36.758761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314666682.mount: Deactivated successfully. 
Sep 16 04:51:36.763886 containerd[1571]: time="2025-09-16T04:51:36.763832971Z" level=info msg="CreateContainer within sandbox \"1d234bb6d9a453df52e86767e29ebb80a1c2552e5244d962f54fee56f1faae54\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"48e0ffd37df89ddce11d04a6a473d41bcd43bd93e6a3830bf2a021fd9e5c3e20\"" Sep 16 04:51:36.764591 containerd[1571]: time="2025-09-16T04:51:36.764550134Z" level=info msg="StartContainer for \"48e0ffd37df89ddce11d04a6a473d41bcd43bd93e6a3830bf2a021fd9e5c3e20\"" Sep 16 04:51:36.766138 containerd[1571]: time="2025-09-16T04:51:36.766054617Z" level=info msg="connecting to shim 48e0ffd37df89ddce11d04a6a473d41bcd43bd93e6a3830bf2a021fd9e5c3e20" address="unix:///run/containerd/s/32645d3972317ca208c075490552f085b6a5d20d0257c59354ef0e965ce697c1" protocol=ttrpc version=3 Sep 16 04:51:36.787584 systemd[1]: Started cri-containerd-48e0ffd37df89ddce11d04a6a473d41bcd43bd93e6a3830bf2a021fd9e5c3e20.scope - libcontainer container 48e0ffd37df89ddce11d04a6a473d41bcd43bd93e6a3830bf2a021fd9e5c3e20. 
Sep 16 04:51:36.876026 containerd[1571]: time="2025-09-16T04:51:36.875802187Z" level=info msg="StartContainer for \"48e0ffd37df89ddce11d04a6a473d41bcd43bd93e6a3830bf2a021fd9e5c3e20\" returns successfully" Sep 16 04:51:36.953976 containerd[1571]: time="2025-09-16T04:51:36.953901430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-h62w4,Uid:92e9499d-f840-47f6-bceb-12c70fd5d26f,Namespace:tigera-operator,Attempt:0,}" Sep 16 04:51:36.981915 containerd[1571]: time="2025-09-16T04:51:36.981841823Z" level=info msg="connecting to shim 9d8b3560594a7ca82a4548ee6f581d318bab181aac1c31af307b4224c86bb099" address="unix:///run/containerd/s/2ec7a4275a789e3260d847742b29bee602185f21897bc063f414a5c4d183e74f" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:51:37.031615 systemd[1]: Started cri-containerd-9d8b3560594a7ca82a4548ee6f581d318bab181aac1c31af307b4224c86bb099.scope - libcontainer container 9d8b3560594a7ca82a4548ee6f581d318bab181aac1c31af307b4224c86bb099. Sep 16 04:51:37.109344 containerd[1571]: time="2025-09-16T04:51:37.108573536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-h62w4,Uid:92e9499d-f840-47f6-bceb-12c70fd5d26f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9d8b3560594a7ca82a4548ee6f581d318bab181aac1c31af307b4224c86bb099\"" Sep 16 04:51:37.111884 containerd[1571]: time="2025-09-16T04:51:37.111800379Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 16 04:51:37.705284 kubelet[2707]: E0916 04:51:37.705244 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:37.715949 kubelet[2707]: I0916 04:51:37.715862 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-266sh" podStartSLOduration=1.715820846 podStartE2EDuration="1.715820846s" podCreationTimestamp="2025-09-16 04:51:36 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:51:37.71532836 +0000 UTC m=+6.248603726" watchObservedRunningTime="2025-09-16 04:51:37.715820846 +0000 UTC m=+6.249096202" Sep 16 04:51:38.839283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3028584243.mount: Deactivated successfully. Sep 16 04:51:39.440350 containerd[1571]: time="2025-09-16T04:51:39.440283853Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:39.441181 containerd[1571]: time="2025-09-16T04:51:39.441139206Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 16 04:51:39.442413 containerd[1571]: time="2025-09-16T04:51:39.442369000Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:39.447033 containerd[1571]: time="2025-09-16T04:51:39.446983335Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:39.447707 containerd[1571]: time="2025-09-16T04:51:39.447663756Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.335805587s" Sep 16 04:51:39.447707 containerd[1571]: time="2025-09-16T04:51:39.447699675Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" 
Sep 16 04:51:39.451983 containerd[1571]: time="2025-09-16T04:51:39.451918389Z" level=info msg="CreateContainer within sandbox \"9d8b3560594a7ca82a4548ee6f581d318bab181aac1c31af307b4224c86bb099\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 16 04:51:39.478100 containerd[1571]: time="2025-09-16T04:51:39.478055827Z" level=info msg="Container 75d98b3120ea49d016885b405785c07ba78cd2bbb60c7b33856b6feeb0aa621a: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:51:39.521760 containerd[1571]: time="2025-09-16T04:51:39.521674585Z" level=info msg="CreateContainer within sandbox \"9d8b3560594a7ca82a4548ee6f581d318bab181aac1c31af307b4224c86bb099\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"75d98b3120ea49d016885b405785c07ba78cd2bbb60c7b33856b6feeb0aa621a\"" Sep 16 04:51:39.522347 containerd[1571]: time="2025-09-16T04:51:39.522295864Z" level=info msg="StartContainer for \"75d98b3120ea49d016885b405785c07ba78cd2bbb60c7b33856b6feeb0aa621a\"" Sep 16 04:51:39.524006 containerd[1571]: time="2025-09-16T04:51:39.523930606Z" level=info msg="connecting to shim 75d98b3120ea49d016885b405785c07ba78cd2bbb60c7b33856b6feeb0aa621a" address="unix:///run/containerd/s/2ec7a4275a789e3260d847742b29bee602185f21897bc063f414a5c4d183e74f" protocol=ttrpc version=3 Sep 16 04:51:39.593655 systemd[1]: Started cri-containerd-75d98b3120ea49d016885b405785c07ba78cd2bbb60c7b33856b6feeb0aa621a.scope - libcontainer container 75d98b3120ea49d016885b405785c07ba78cd2bbb60c7b33856b6feeb0aa621a. 
Sep 16 04:51:39.634545 containerd[1571]: time="2025-09-16T04:51:39.634492253Z" level=info msg="StartContainer for \"75d98b3120ea49d016885b405785c07ba78cd2bbb60c7b33856b6feeb0aa621a\" returns successfully" Sep 16 04:51:39.674927 kubelet[2707]: E0916 04:51:39.674853 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:39.713616 kubelet[2707]: E0916 04:51:39.713389 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:40.616654 update_engine[1508]: I20250916 04:51:40.616518 1508 update_attempter.cc:509] Updating boot flags... Sep 16 04:51:46.435067 sudo[1776]: pam_unix(sudo:session): session closed for user root Sep 16 04:51:46.437947 sshd[1775]: Connection closed by 10.0.0.1 port 58198 Sep 16 04:51:46.439089 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Sep 16 04:51:46.447411 systemd[1]: sshd@6-10.0.0.58:22-10.0.0.1:58198.service: Deactivated successfully. Sep 16 04:51:46.451293 systemd[1]: session-7.scope: Deactivated successfully. Sep 16 04:51:46.451847 systemd[1]: session-7.scope: Consumed 5.645s CPU time, 227.5M memory peak. Sep 16 04:51:46.457170 systemd-logind[1507]: Session 7 logged out. Waiting for processes to exit. Sep 16 04:51:46.461650 systemd-logind[1507]: Removed session 7. 
Sep 16 04:51:49.370366 kubelet[2707]: I0916 04:51:49.370277 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-h62w4" podStartSLOduration=11.030485319 podStartE2EDuration="13.370228191s" podCreationTimestamp="2025-09-16 04:51:36 +0000 UTC" firstStartedPulling="2025-09-16 04:51:37.110631537 +0000 UTC m=+5.643906894" lastFinishedPulling="2025-09-16 04:51:39.45037441 +0000 UTC m=+7.983649766" observedRunningTime="2025-09-16 04:51:39.741863024 +0000 UTC m=+8.275138390" watchObservedRunningTime="2025-09-16 04:51:49.370228191 +0000 UTC m=+17.903503547" Sep 16 04:51:49.388569 systemd[1]: Created slice kubepods-besteffort-podda95e290_5e9c_4f32_a31e_7f2c7519585f.slice - libcontainer container kubepods-besteffort-podda95e290_5e9c_4f32_a31e_7f2c7519585f.slice. Sep 16 04:51:49.483048 kubelet[2707]: I0916 04:51:49.482971 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da95e290-5e9c-4f32-a31e-7f2c7519585f-tigera-ca-bundle\") pod \"calico-typha-6ccf8655cb-xmv5g\" (UID: \"da95e290-5e9c-4f32-a31e-7f2c7519585f\") " pod="calico-system/calico-typha-6ccf8655cb-xmv5g" Sep 16 04:51:49.483048 kubelet[2707]: I0916 04:51:49.483040 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/da95e290-5e9c-4f32-a31e-7f2c7519585f-typha-certs\") pod \"calico-typha-6ccf8655cb-xmv5g\" (UID: \"da95e290-5e9c-4f32-a31e-7f2c7519585f\") " pod="calico-system/calico-typha-6ccf8655cb-xmv5g" Sep 16 04:51:49.483302 kubelet[2707]: I0916 04:51:49.483100 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb65c\" (UniqueName: \"kubernetes.io/projected/da95e290-5e9c-4f32-a31e-7f2c7519585f-kube-api-access-pb65c\") pod \"calico-typha-6ccf8655cb-xmv5g\" (UID: 
\"da95e290-5e9c-4f32-a31e-7f2c7519585f\") " pod="calico-system/calico-typha-6ccf8655cb-xmv5g" Sep 16 04:51:49.696013 kubelet[2707]: E0916 04:51:49.695556 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:49.696529 containerd[1571]: time="2025-09-16T04:51:49.696337703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ccf8655cb-xmv5g,Uid:da95e290-5e9c-4f32-a31e-7f2c7519585f,Namespace:calico-system,Attempt:0,}" Sep 16 04:51:49.869844 systemd[1]: Created slice kubepods-besteffort-pod1e9c6e41_0666_4f0b_a601_ba0f4d6e970f.slice - libcontainer container kubepods-besteffort-pod1e9c6e41_0666_4f0b_a601_ba0f4d6e970f.slice. Sep 16 04:51:49.886370 kubelet[2707]: I0916 04:51:49.886310 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1e9c6e41-0666-4f0b-a601-ba0f4d6e970f-cni-net-dir\") pod \"calico-node-khhpb\" (UID: \"1e9c6e41-0666-4f0b-a601-ba0f4d6e970f\") " pod="calico-system/calico-node-khhpb" Sep 16 04:51:49.886534 kubelet[2707]: I0916 04:51:49.886380 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1e9c6e41-0666-4f0b-a601-ba0f4d6e970f-cni-bin-dir\") pod \"calico-node-khhpb\" (UID: \"1e9c6e41-0666-4f0b-a601-ba0f4d6e970f\") " pod="calico-system/calico-node-khhpb" Sep 16 04:51:49.886534 kubelet[2707]: I0916 04:51:49.886407 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e9c6e41-0666-4f0b-a601-ba0f4d6e970f-lib-modules\") pod \"calico-node-khhpb\" (UID: \"1e9c6e41-0666-4f0b-a601-ba0f4d6e970f\") " pod="calico-system/calico-node-khhpb" Sep 16 04:51:49.886534 kubelet[2707]: I0916 04:51:49.886451 2707 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1e9c6e41-0666-4f0b-a601-ba0f4d6e970f-policysync\") pod \"calico-node-khhpb\" (UID: \"1e9c6e41-0666-4f0b-a601-ba0f4d6e970f\") " pod="calico-system/calico-node-khhpb" Sep 16 04:51:49.886534 kubelet[2707]: I0916 04:51:49.886478 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e9c6e41-0666-4f0b-a601-ba0f4d6e970f-xtables-lock\") pod \"calico-node-khhpb\" (UID: \"1e9c6e41-0666-4f0b-a601-ba0f4d6e970f\") " pod="calico-system/calico-node-khhpb" Sep 16 04:51:49.886534 kubelet[2707]: I0916 04:51:49.886499 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e9c6e41-0666-4f0b-a601-ba0f4d6e970f-tigera-ca-bundle\") pod \"calico-node-khhpb\" (UID: \"1e9c6e41-0666-4f0b-a601-ba0f4d6e970f\") " pod="calico-system/calico-node-khhpb" Sep 16 04:51:49.886655 kubelet[2707]: I0916 04:51:49.886516 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e9c6e41-0666-4f0b-a601-ba0f4d6e970f-var-lib-calico\") pod \"calico-node-khhpb\" (UID: \"1e9c6e41-0666-4f0b-a601-ba0f4d6e970f\") " pod="calico-system/calico-node-khhpb" Sep 16 04:51:49.886655 kubelet[2707]: I0916 04:51:49.886535 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1e9c6e41-0666-4f0b-a601-ba0f4d6e970f-var-run-calico\") pod \"calico-node-khhpb\" (UID: \"1e9c6e41-0666-4f0b-a601-ba0f4d6e970f\") " pod="calico-system/calico-node-khhpb" Sep 16 04:51:49.886655 kubelet[2707]: I0916 04:51:49.886552 2707 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1e9c6e41-0666-4f0b-a601-ba0f4d6e970f-flexvol-driver-host\") pod \"calico-node-khhpb\" (UID: \"1e9c6e41-0666-4f0b-a601-ba0f4d6e970f\") " pod="calico-system/calico-node-khhpb" Sep 16 04:51:49.886655 kubelet[2707]: I0916 04:51:49.886573 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw5wd\" (UniqueName: \"kubernetes.io/projected/1e9c6e41-0666-4f0b-a601-ba0f4d6e970f-kube-api-access-fw5wd\") pod \"calico-node-khhpb\" (UID: \"1e9c6e41-0666-4f0b-a601-ba0f4d6e970f\") " pod="calico-system/calico-node-khhpb" Sep 16 04:51:49.886655 kubelet[2707]: I0916 04:51:49.886587 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1e9c6e41-0666-4f0b-a601-ba0f4d6e970f-cni-log-dir\") pod \"calico-node-khhpb\" (UID: \"1e9c6e41-0666-4f0b-a601-ba0f4d6e970f\") " pod="calico-system/calico-node-khhpb" Sep 16 04:51:49.886797 kubelet[2707]: I0916 04:51:49.886605 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1e9c6e41-0666-4f0b-a601-ba0f4d6e970f-node-certs\") pod \"calico-node-khhpb\" (UID: \"1e9c6e41-0666-4f0b-a601-ba0f4d6e970f\") " pod="calico-system/calico-node-khhpb" Sep 16 04:51:49.914325 containerd[1571]: time="2025-09-16T04:51:49.914235459Z" level=info msg="connecting to shim eaa712f4f7f806ddb4ee47a598eea06e3ac777456ca56de4e8548e3bcc3fc253" address="unix:///run/containerd/s/f07e206c7c8927850f4aca3abb50b7998a34f8299419904161e671405a9364b3" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:51:49.963931 systemd[1]: Started cri-containerd-eaa712f4f7f806ddb4ee47a598eea06e3ac777456ca56de4e8548e3bcc3fc253.scope - libcontainer container 
eaa712f4f7f806ddb4ee47a598eea06e3ac777456ca56de4e8548e3bcc3fc253. Sep 16 04:51:50.002484 kubelet[2707]: E0916 04:51:50.000620 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mxpq2" podUID="d6c9282e-1ae2-4573-883a-f016a02e49ed" Sep 16 04:51:50.026680 kubelet[2707]: E0916 04:51:50.026613 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.026680 kubelet[2707]: W0916 04:51:50.026659 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.026680 kubelet[2707]: E0916 04:51:50.026693 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.066880 kubelet[2707]: E0916 04:51:50.066821 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.066880 kubelet[2707]: W0916 04:51:50.066857 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.066880 kubelet[2707]: E0916 04:51:50.066891 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.067272 containerd[1571]: time="2025-09-16T04:51:50.067214986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ccf8655cb-xmv5g,Uid:da95e290-5e9c-4f32-a31e-7f2c7519585f,Namespace:calico-system,Attempt:0,} returns sandbox id \"eaa712f4f7f806ddb4ee47a598eea06e3ac777456ca56de4e8548e3bcc3fc253\"" Sep 16 04:51:50.069472 kubelet[2707]: E0916 04:51:50.068593 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.069472 kubelet[2707]: W0916 04:51:50.068614 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.069472 kubelet[2707]: E0916 04:51:50.068628 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.070348 kubelet[2707]: E0916 04:51:50.070128 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.070348 kubelet[2707]: W0916 04:51:50.070340 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.070348 kubelet[2707]: E0916 04:51:50.070355 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.070962 kubelet[2707]: E0916 04:51:50.070821 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:50.071130 kubelet[2707]: E0916 04:51:50.071105 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.071130 kubelet[2707]: W0916 04:51:50.071120 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.071130 kubelet[2707]: E0916 04:51:50.071130 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.072614 kubelet[2707]: E0916 04:51:50.072587 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.072803 kubelet[2707]: W0916 04:51:50.072759 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.072861 kubelet[2707]: E0916 04:51:50.072840 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.073920 containerd[1571]: time="2025-09-16T04:51:50.073857955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 16 04:51:50.074207 kubelet[2707]: E0916 04:51:50.074181 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.074207 kubelet[2707]: W0916 04:51:50.074197 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.074207 kubelet[2707]: E0916 04:51:50.074207 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.075523 kubelet[2707]: E0916 04:51:50.075496 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.075523 kubelet[2707]: W0916 04:51:50.075513 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.075615 kubelet[2707]: E0916 04:51:50.075522 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.075814 kubelet[2707]: E0916 04:51:50.075786 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.075814 kubelet[2707]: W0916 04:51:50.075802 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.075894 kubelet[2707]: E0916 04:51:50.075817 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.076082 kubelet[2707]: E0916 04:51:50.076056 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.076082 kubelet[2707]: W0916 04:51:50.076077 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.076158 kubelet[2707]: E0916 04:51:50.076090 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.077475 kubelet[2707]: E0916 04:51:50.076292 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.077475 kubelet[2707]: W0916 04:51:50.076305 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.077475 kubelet[2707]: E0916 04:51:50.076314 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.077605 kubelet[2707]: E0916 04:51:50.077503 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.077605 kubelet[2707]: W0916 04:51:50.077513 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.077605 kubelet[2707]: E0916 04:51:50.077523 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.077765 kubelet[2707]: E0916 04:51:50.077725 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.077765 kubelet[2707]: W0916 04:51:50.077755 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.077765 kubelet[2707]: E0916 04:51:50.077768 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.079897 kubelet[2707]: E0916 04:51:50.079862 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.079897 kubelet[2707]: W0916 04:51:50.079881 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.079897 kubelet[2707]: E0916 04:51:50.079891 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.080124 kubelet[2707]: E0916 04:51:50.080095 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.080124 kubelet[2707]: W0916 04:51:50.080114 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.080124 kubelet[2707]: E0916 04:51:50.080127 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.080404 kubelet[2707]: E0916 04:51:50.080376 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.080404 kubelet[2707]: W0916 04:51:50.080390 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.080404 kubelet[2707]: E0916 04:51:50.080399 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.081685 kubelet[2707]: E0916 04:51:50.081652 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.081685 kubelet[2707]: W0916 04:51:50.081670 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.081685 kubelet[2707]: E0916 04:51:50.081680 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.081992 kubelet[2707]: E0916 04:51:50.081959 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.081992 kubelet[2707]: W0916 04:51:50.081979 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.081992 kubelet[2707]: E0916 04:51:50.081993 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.082923 kubelet[2707]: E0916 04:51:50.082200 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.082923 kubelet[2707]: W0916 04:51:50.082216 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.082923 kubelet[2707]: E0916 04:51:50.082228 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.082923 kubelet[2707]: E0916 04:51:50.082421 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.082923 kubelet[2707]: W0916 04:51:50.082452 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.082923 kubelet[2707]: E0916 04:51:50.082463 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.082923 kubelet[2707]: E0916 04:51:50.082680 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.082923 kubelet[2707]: W0916 04:51:50.082690 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.082923 kubelet[2707]: E0916 04:51:50.082700 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.089393 kubelet[2707]: E0916 04:51:50.089023 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.089393 kubelet[2707]: W0916 04:51:50.089066 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.089393 kubelet[2707]: E0916 04:51:50.089107 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.089393 kubelet[2707]: I0916 04:51:50.089174 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d6c9282e-1ae2-4573-883a-f016a02e49ed-registration-dir\") pod \"csi-node-driver-mxpq2\" (UID: \"d6c9282e-1ae2-4573-883a-f016a02e49ed\") " pod="calico-system/csi-node-driver-mxpq2" Sep 16 04:51:50.089983 kubelet[2707]: E0916 04:51:50.089961 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.090091 kubelet[2707]: W0916 04:51:50.090074 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.090166 kubelet[2707]: E0916 04:51:50.090151 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.090256 kubelet[2707]: I0916 04:51:50.090238 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlswg\" (UniqueName: \"kubernetes.io/projected/d6c9282e-1ae2-4573-883a-f016a02e49ed-kube-api-access-zlswg\") pod \"csi-node-driver-mxpq2\" (UID: \"d6c9282e-1ae2-4573-883a-f016a02e49ed\") " pod="calico-system/csi-node-driver-mxpq2" Sep 16 04:51:50.090624 kubelet[2707]: E0916 04:51:50.090606 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.090891 kubelet[2707]: W0916 04:51:50.090704 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.090891 kubelet[2707]: E0916 04:51:50.090781 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.090891 kubelet[2707]: I0916 04:51:50.090874 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d6c9282e-1ae2-4573-883a-f016a02e49ed-socket-dir\") pod \"csi-node-driver-mxpq2\" (UID: \"d6c9282e-1ae2-4573-883a-f016a02e49ed\") " pod="calico-system/csi-node-driver-mxpq2" Sep 16 04:51:50.091084 kubelet[2707]: E0916 04:51:50.091068 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.091160 kubelet[2707]: W0916 04:51:50.091145 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.091377 kubelet[2707]: E0916 04:51:50.091336 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.091612 kubelet[2707]: E0916 04:51:50.091594 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.091816 kubelet[2707]: W0916 04:51:50.091670 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.091816 kubelet[2707]: E0916 04:51:50.091698 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.091963 kubelet[2707]: E0916 04:51:50.091948 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.092034 kubelet[2707]: W0916 04:51:50.092022 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.092115 kubelet[2707]: E0916 04:51:50.092099 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.092232 kubelet[2707]: I0916 04:51:50.092216 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6c9282e-1ae2-4573-883a-f016a02e49ed-kubelet-dir\") pod \"csi-node-driver-mxpq2\" (UID: \"d6c9282e-1ae2-4573-883a-f016a02e49ed\") " pod="calico-system/csi-node-driver-mxpq2" Sep 16 04:51:50.092406 kubelet[2707]: E0916 04:51:50.092379 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.092406 kubelet[2707]: W0916 04:51:50.092401 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.092529 kubelet[2707]: E0916 04:51:50.092424 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.092644 kubelet[2707]: E0916 04:51:50.092624 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.092644 kubelet[2707]: W0916 04:51:50.092638 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.092762 kubelet[2707]: E0916 04:51:50.092650 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.092892 kubelet[2707]: E0916 04:51:50.092873 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.092892 kubelet[2707]: W0916 04:51:50.092886 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.093008 kubelet[2707]: E0916 04:51:50.092904 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.093100 kubelet[2707]: E0916 04:51:50.093082 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.093100 kubelet[2707]: W0916 04:51:50.093095 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.093218 kubelet[2707]: E0916 04:51:50.093114 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.093218 kubelet[2707]: I0916 04:51:50.093138 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d6c9282e-1ae2-4573-883a-f016a02e49ed-varrun\") pod \"csi-node-driver-mxpq2\" (UID: \"d6c9282e-1ae2-4573-883a-f016a02e49ed\") " pod="calico-system/csi-node-driver-mxpq2" Sep 16 04:51:50.093338 kubelet[2707]: E0916 04:51:50.093320 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.093338 kubelet[2707]: W0916 04:51:50.093334 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.093390 kubelet[2707]: E0916 04:51:50.093345 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.093778 kubelet[2707]: E0916 04:51:50.093748 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.093778 kubelet[2707]: W0916 04:51:50.093763 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.094308 kubelet[2707]: E0916 04:51:50.094265 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.094613 kubelet[2707]: E0916 04:51:50.094591 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.094613 kubelet[2707]: W0916 04:51:50.094608 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.094712 kubelet[2707]: E0916 04:51:50.094620 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.094927 kubelet[2707]: E0916 04:51:50.094907 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.094927 kubelet[2707]: W0916 04:51:50.094922 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.095026 kubelet[2707]: E0916 04:51:50.094935 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.095244 kubelet[2707]: E0916 04:51:50.095222 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.095244 kubelet[2707]: W0916 04:51:50.095238 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.095341 kubelet[2707]: E0916 04:51:50.095252 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.177257 containerd[1571]: time="2025-09-16T04:51:50.177145566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-khhpb,Uid:1e9c6e41-0666-4f0b-a601-ba0f4d6e970f,Namespace:calico-system,Attempt:0,}" Sep 16 04:51:50.194087 kubelet[2707]: E0916 04:51:50.194047 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.194087 kubelet[2707]: W0916 04:51:50.194071 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.194087 kubelet[2707]: E0916 04:51:50.194100 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.194488 kubelet[2707]: E0916 04:51:50.194471 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.194488 kubelet[2707]: W0916 04:51:50.194485 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.194563 kubelet[2707]: E0916 04:51:50.194500 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.194846 kubelet[2707]: E0916 04:51:50.194814 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.194897 kubelet[2707]: W0916 04:51:50.194843 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.194897 kubelet[2707]: E0916 04:51:50.194877 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.195091 kubelet[2707]: E0916 04:51:50.195075 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.195091 kubelet[2707]: W0916 04:51:50.195087 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.195160 kubelet[2707]: E0916 04:51:50.195103 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.195538 kubelet[2707]: E0916 04:51:50.195478 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.195538 kubelet[2707]: W0916 04:51:50.195532 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.195636 kubelet[2707]: E0916 04:51:50.195582 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.196008 kubelet[2707]: E0916 04:51:50.195977 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.196008 kubelet[2707]: W0916 04:51:50.195993 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.196086 kubelet[2707]: E0916 04:51:50.196034 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.196286 kubelet[2707]: E0916 04:51:50.196248 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.196286 kubelet[2707]: W0916 04:51:50.196265 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.196359 kubelet[2707]: E0916 04:51:50.196297 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.196526 kubelet[2707]: E0916 04:51:50.196509 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.196526 kubelet[2707]: W0916 04:51:50.196523 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.196581 kubelet[2707]: E0916 04:51:50.196557 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.196786 kubelet[2707]: E0916 04:51:50.196767 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.196786 kubelet[2707]: W0916 04:51:50.196781 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.196851 kubelet[2707]: E0916 04:51:50.196799 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.197019 kubelet[2707]: E0916 04:51:50.197004 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.197019 kubelet[2707]: W0916 04:51:50.197017 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.197071 kubelet[2707]: E0916 04:51:50.197049 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.197348 kubelet[2707]: E0916 04:51:50.197329 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.197348 kubelet[2707]: W0916 04:51:50.197345 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.197406 kubelet[2707]: E0916 04:51:50.197367 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.197642 kubelet[2707]: E0916 04:51:50.197622 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.197642 kubelet[2707]: W0916 04:51:50.197637 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.197756 kubelet[2707]: E0916 04:51:50.197727 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.197966 kubelet[2707]: E0916 04:51:50.197931 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.197966 kubelet[2707]: W0916 04:51:50.197945 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.197966 kubelet[2707]: E0916 04:51:50.197974 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.198226 kubelet[2707]: E0916 04:51:50.198150 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.198226 kubelet[2707]: W0916 04:51:50.198158 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.198226 kubelet[2707]: E0916 04:51:50.198185 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.198343 kubelet[2707]: E0916 04:51:50.198323 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.198343 kubelet[2707]: W0916 04:51:50.198337 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.198402 kubelet[2707]: E0916 04:51:50.198362 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.198552 kubelet[2707]: E0916 04:51:50.198533 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.198552 kubelet[2707]: W0916 04:51:50.198545 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.198640 kubelet[2707]: E0916 04:51:50.198570 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.198757 kubelet[2707]: E0916 04:51:50.198726 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.198757 kubelet[2707]: W0916 04:51:50.198755 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.198827 kubelet[2707]: E0916 04:51:50.198771 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.199068 kubelet[2707]: E0916 04:51:50.199045 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.199068 kubelet[2707]: W0916 04:51:50.199060 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.199143 kubelet[2707]: E0916 04:51:50.199075 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.199277 kubelet[2707]: E0916 04:51:50.199257 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.199277 kubelet[2707]: W0916 04:51:50.199271 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.199349 kubelet[2707]: E0916 04:51:50.199284 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.199531 kubelet[2707]: E0916 04:51:50.199510 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.199531 kubelet[2707]: W0916 04:51:50.199524 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.199608 kubelet[2707]: E0916 04:51:50.199539 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.199834 kubelet[2707]: E0916 04:51:50.199813 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.199834 kubelet[2707]: W0916 04:51:50.199826 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.199909 kubelet[2707]: E0916 04:51:50.199843 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.200530 kubelet[2707]: E0916 04:51:50.200508 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.200586 kubelet[2707]: W0916 04:51:50.200525 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.200586 kubelet[2707]: E0916 04:51:50.200557 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.201756 kubelet[2707]: E0916 04:51:50.201716 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.201756 kubelet[2707]: W0916 04:51:50.201741 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.201943 kubelet[2707]: E0916 04:51:50.201855 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.202096 kubelet[2707]: E0916 04:51:50.202075 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.202096 kubelet[2707]: W0916 04:51:50.202093 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.202167 kubelet[2707]: E0916 04:51:50.202112 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:50.202479 kubelet[2707]: E0916 04:51:50.202395 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.202479 kubelet[2707]: W0916 04:51:50.202409 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.202888 kubelet[2707]: E0916 04:51:50.202421 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.206005 containerd[1571]: time="2025-09-16T04:51:50.205910588Z" level=info msg="connecting to shim c288aa068b727afa355abbc8c2ac4bb29f3e69a897266c4f059760ac9c098cee" address="unix:///run/containerd/s/3df5ec3487163600acf4ecaa6734aba73890bea72e5ffe0e4b7fa0a1f439780a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:51:50.213125 kubelet[2707]: E0916 04:51:50.213040 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:50.213125 kubelet[2707]: W0916 04:51:50.213063 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:50.213125 kubelet[2707]: E0916 04:51:50.213086 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:50.238610 systemd[1]: Started cri-containerd-c288aa068b727afa355abbc8c2ac4bb29f3e69a897266c4f059760ac9c098cee.scope - libcontainer container c288aa068b727afa355abbc8c2ac4bb29f3e69a897266c4f059760ac9c098cee. 
Sep 16 04:51:50.275978 containerd[1571]: time="2025-09-16T04:51:50.275918420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-khhpb,Uid:1e9c6e41-0666-4f0b-a601-ba0f4d6e970f,Namespace:calico-system,Attempt:0,} returns sandbox id \"c288aa068b727afa355abbc8c2ac4bb29f3e69a897266c4f059760ac9c098cee\"" Sep 16 04:51:51.670366 kubelet[2707]: E0916 04:51:51.670282 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mxpq2" podUID="d6c9282e-1ae2-4573-883a-f016a02e49ed" Sep 16 04:51:51.783116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2941493705.mount: Deactivated successfully. Sep 16 04:51:53.669983 kubelet[2707]: E0916 04:51:53.669839 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mxpq2" podUID="d6c9282e-1ae2-4573-883a-f016a02e49ed" Sep 16 04:51:53.906786 containerd[1571]: time="2025-09-16T04:51:53.906706181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:53.907605 containerd[1571]: time="2025-09-16T04:51:53.907553246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 16 04:51:53.908840 containerd[1571]: time="2025-09-16T04:51:53.908796459Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:53.911521 containerd[1571]: time="2025-09-16T04:51:53.911254480Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:53.912206 containerd[1571]: time="2025-09-16T04:51:53.912161809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.838257788s" Sep 16 04:51:53.912206 containerd[1571]: time="2025-09-16T04:51:53.912195323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 16 04:51:53.913255 containerd[1571]: time="2025-09-16T04:51:53.913215926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 16 04:51:53.923738 containerd[1571]: time="2025-09-16T04:51:53.923584782Z" level=info msg="CreateContainer within sandbox \"eaa712f4f7f806ddb4ee47a598eea06e3ac777456ca56de4e8548e3bcc3fc253\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 16 04:51:53.935558 containerd[1571]: time="2025-09-16T04:51:53.935497267Z" level=info msg="Container e8515f09bc6650632a4b72d571f292eaa8ac4ea0c07ea5f8b93f547ce2da6cc1: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:51:53.947470 containerd[1571]: time="2025-09-16T04:51:53.947411505Z" level=info msg="CreateContainer within sandbox \"eaa712f4f7f806ddb4ee47a598eea06e3ac777456ca56de4e8548e3bcc3fc253\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e8515f09bc6650632a4b72d571f292eaa8ac4ea0c07ea5f8b93f547ce2da6cc1\"" Sep 16 04:51:53.948166 containerd[1571]: time="2025-09-16T04:51:53.948118347Z" level=info msg="StartContainer for 
\"e8515f09bc6650632a4b72d571f292eaa8ac4ea0c07ea5f8b93f547ce2da6cc1\"" Sep 16 04:51:53.949592 containerd[1571]: time="2025-09-16T04:51:53.949565143Z" level=info msg="connecting to shim e8515f09bc6650632a4b72d571f292eaa8ac4ea0c07ea5f8b93f547ce2da6cc1" address="unix:///run/containerd/s/f07e206c7c8927850f4aca3abb50b7998a34f8299419904161e671405a9364b3" protocol=ttrpc version=3 Sep 16 04:51:53.975624 systemd[1]: Started cri-containerd-e8515f09bc6650632a4b72d571f292eaa8ac4ea0c07ea5f8b93f547ce2da6cc1.scope - libcontainer container e8515f09bc6650632a4b72d571f292eaa8ac4ea0c07ea5f8b93f547ce2da6cc1. Sep 16 04:51:54.041895 containerd[1571]: time="2025-09-16T04:51:54.041829789Z" level=info msg="StartContainer for \"e8515f09bc6650632a4b72d571f292eaa8ac4ea0c07ea5f8b93f547ce2da6cc1\" returns successfully" Sep 16 04:51:54.772768 kubelet[2707]: E0916 04:51:54.772075 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:54.817321 kubelet[2707]: E0916 04:51:54.817262 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.817321 kubelet[2707]: W0916 04:51:54.817297 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.817321 kubelet[2707]: E0916 04:51:54.817327 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:54.817695 kubelet[2707]: E0916 04:51:54.817674 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.817695 kubelet[2707]: W0916 04:51:54.817691 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.817695 kubelet[2707]: E0916 04:51:54.817703 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:54.817946 kubelet[2707]: E0916 04:51:54.817918 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.817946 kubelet[2707]: W0916 04:51:54.817933 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.817946 kubelet[2707]: E0916 04:51:54.817943 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:54.818193 kubelet[2707]: E0916 04:51:54.818175 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.818193 kubelet[2707]: W0916 04:51:54.818188 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.818264 kubelet[2707]: E0916 04:51:54.818199 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:54.818484 kubelet[2707]: E0916 04:51:54.818412 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.818484 kubelet[2707]: W0916 04:51:54.818424 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.818484 kubelet[2707]: E0916 04:51:54.818459 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:54.818790 kubelet[2707]: E0916 04:51:54.818741 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.818790 kubelet[2707]: W0916 04:51:54.818774 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.818790 kubelet[2707]: E0916 04:51:54.818806 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:54.819122 kubelet[2707]: E0916 04:51:54.819099 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.819122 kubelet[2707]: W0916 04:51:54.819117 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.819191 kubelet[2707]: E0916 04:51:54.819132 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:54.819393 kubelet[2707]: E0916 04:51:54.819369 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.819393 kubelet[2707]: W0916 04:51:54.819384 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.819564 kubelet[2707]: E0916 04:51:54.819402 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:54.819737 kubelet[2707]: E0916 04:51:54.819706 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.819737 kubelet[2707]: W0916 04:51:54.819721 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.819737 kubelet[2707]: E0916 04:51:54.819732 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:54.819980 kubelet[2707]: E0916 04:51:54.819963 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.819980 kubelet[2707]: W0916 04:51:54.819976 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.820029 kubelet[2707]: E0916 04:51:54.819986 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:54.820391 kubelet[2707]: E0916 04:51:54.820344 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.820498 kubelet[2707]: W0916 04:51:54.820391 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.820498 kubelet[2707]: E0916 04:51:54.820427 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:54.820801 kubelet[2707]: E0916 04:51:54.820782 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.820801 kubelet[2707]: W0916 04:51:54.820797 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.820884 kubelet[2707]: E0916 04:51:54.820810 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:54.821055 kubelet[2707]: E0916 04:51:54.821022 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.821055 kubelet[2707]: W0916 04:51:54.821041 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.821055 kubelet[2707]: E0916 04:51:54.821052 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:54.821257 kubelet[2707]: E0916 04:51:54.821240 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.821257 kubelet[2707]: W0916 04:51:54.821253 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.821329 kubelet[2707]: E0916 04:51:54.821264 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:54.821527 kubelet[2707]: E0916 04:51:54.821506 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.821527 kubelet[2707]: W0916 04:51:54.821521 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.821607 kubelet[2707]: E0916 04:51:54.821532 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:54.832283 kubelet[2707]: E0916 04:51:54.832227 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.832283 kubelet[2707]: W0916 04:51:54.832256 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.832501 kubelet[2707]: E0916 04:51:54.832310 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:54.832629 kubelet[2707]: E0916 04:51:54.832604 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.832629 kubelet[2707]: W0916 04:51:54.832619 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.832704 kubelet[2707]: E0916 04:51:54.832634 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:54.832936 kubelet[2707]: E0916 04:51:54.832904 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.832936 kubelet[2707]: W0916 04:51:54.832928 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.833000 kubelet[2707]: E0916 04:51:54.832948 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:54.833212 kubelet[2707]: E0916 04:51:54.833186 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.833212 kubelet[2707]: W0916 04:51:54.833202 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.833267 kubelet[2707]: E0916 04:51:54.833219 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:54.833471 kubelet[2707]: E0916 04:51:54.833452 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.833471 kubelet[2707]: W0916 04:51:54.833467 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.833530 kubelet[2707]: E0916 04:51:54.833484 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:54.833732 kubelet[2707]: E0916 04:51:54.833711 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.833732 kubelet[2707]: W0916 04:51:54.833727 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.833811 kubelet[2707]: E0916 04:51:54.833743 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:54.833981 kubelet[2707]: E0916 04:51:54.833961 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.833981 kubelet[2707]: W0916 04:51:54.833974 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.834043 kubelet[2707]: E0916 04:51:54.833991 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:51:54.834308 kubelet[2707]: E0916 04:51:54.834271 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:51:54.834308 kubelet[2707]: W0916 04:51:54.834294 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:51:54.834366 kubelet[2707]: E0916 04:51:54.834314 2707 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:51:55.329110 containerd[1571]: time="2025-09-16T04:51:55.329036642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:55.329977 containerd[1571]: time="2025-09-16T04:51:55.329947266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 16 04:51:55.331031 containerd[1571]: time="2025-09-16T04:51:55.330972737Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:55.333086 containerd[1571]: time="2025-09-16T04:51:55.333040150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:51:55.333722 containerd[1571]: time="2025-09-16T04:51:55.333687369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.420427881s" Sep 16 04:51:55.333804 containerd[1571]: time="2025-09-16T04:51:55.333725751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 16 04:51:55.335838 containerd[1571]: time="2025-09-16T04:51:55.335805948Z" level=info msg="CreateContainer within sandbox \"c288aa068b727afa355abbc8c2ac4bb29f3e69a897266c4f059760ac9c098cee\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 16 04:51:55.346072 containerd[1571]: time="2025-09-16T04:51:55.346009101Z" level=info msg="Container 5e2f71430cc386e482e62cd25db725cffde531be5d140053aa62747318c5eca7: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:51:55.513224 containerd[1571]: time="2025-09-16T04:51:55.513161304Z" level=info msg="CreateContainer within sandbox \"c288aa068b727afa355abbc8c2ac4bb29f3e69a897266c4f059760ac9c098cee\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5e2f71430cc386e482e62cd25db725cffde531be5d140053aa62747318c5eca7\"" Sep 16 04:51:55.513728 containerd[1571]: time="2025-09-16T04:51:55.513700829Z" level=info msg="StartContainer for \"5e2f71430cc386e482e62cd25db725cffde531be5d140053aa62747318c5eca7\"" Sep 16 04:51:55.515358 containerd[1571]: time="2025-09-16T04:51:55.515324346Z" level=info msg="connecting to shim 5e2f71430cc386e482e62cd25db725cffde531be5d140053aa62747318c5eca7" address="unix:///run/containerd/s/3df5ec3487163600acf4ecaa6734aba73890bea72e5ffe0e4b7fa0a1f439780a" protocol=ttrpc version=3 Sep 16 04:51:55.539733 systemd[1]: Started cri-containerd-5e2f71430cc386e482e62cd25db725cffde531be5d140053aa62747318c5eca7.scope - libcontainer container 5e2f71430cc386e482e62cd25db725cffde531be5d140053aa62747318c5eca7. Sep 16 04:51:55.597527 systemd[1]: cri-containerd-5e2f71430cc386e482e62cd25db725cffde531be5d140053aa62747318c5eca7.scope: Deactivated successfully. 
Sep 16 04:51:55.599391 containerd[1571]: time="2025-09-16T04:51:55.599334761Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e2f71430cc386e482e62cd25db725cffde531be5d140053aa62747318c5eca7\" id:\"5e2f71430cc386e482e62cd25db725cffde531be5d140053aa62747318c5eca7\" pid:3407 exited_at:{seconds:1757998315 nanos:598729160}" Sep 16 04:51:55.669621 kubelet[2707]: E0916 04:51:55.669474 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mxpq2" podUID="d6c9282e-1ae2-4573-883a-f016a02e49ed" Sep 16 04:51:55.815674 containerd[1571]: time="2025-09-16T04:51:55.815538388Z" level=info msg="received exit event container_id:\"5e2f71430cc386e482e62cd25db725cffde531be5d140053aa62747318c5eca7\" id:\"5e2f71430cc386e482e62cd25db725cffde531be5d140053aa62747318c5eca7\" pid:3407 exited_at:{seconds:1757998315 nanos:598729160}" Sep 16 04:51:55.819165 containerd[1571]: time="2025-09-16T04:51:55.819011678Z" level=info msg="StartContainer for \"5e2f71430cc386e482e62cd25db725cffde531be5d140053aa62747318c5eca7\" returns successfully" Sep 16 04:51:55.820356 kubelet[2707]: I0916 04:51:55.820313 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:51:55.820971 kubelet[2707]: E0916 04:51:55.820845 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:51:55.850040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e2f71430cc386e482e62cd25db725cffde531be5d140053aa62747318c5eca7-rootfs.mount: Deactivated successfully. 
Sep 16 04:51:56.825327 containerd[1571]: time="2025-09-16T04:51:56.825247988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 16 04:51:56.849539 kubelet[2707]: I0916 04:51:56.849108 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6ccf8655cb-xmv5g" podStartSLOduration=4.009370359 podStartE2EDuration="7.849001946s" podCreationTimestamp="2025-09-16 04:51:49 +0000 UTC" firstStartedPulling="2025-09-16 04:51:50.073464313 +0000 UTC m=+18.606739689" lastFinishedPulling="2025-09-16 04:51:53.91309592 +0000 UTC m=+22.446371276" observedRunningTime="2025-09-16 04:51:54.785324689 +0000 UTC m=+23.318600055" watchObservedRunningTime="2025-09-16 04:51:56.849001946 +0000 UTC m=+25.382277312" Sep 16 04:51:57.670265 kubelet[2707]: E0916 04:51:57.670181 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mxpq2" podUID="d6c9282e-1ae2-4573-883a-f016a02e49ed" Sep 16 04:51:59.670532 kubelet[2707]: E0916 04:51:59.670374 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mxpq2" podUID="d6c9282e-1ae2-4573-883a-f016a02e49ed" Sep 16 04:52:00.316700 containerd[1571]: time="2025-09-16T04:52:00.316614666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:00.317492 containerd[1571]: time="2025-09-16T04:52:00.317451660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 16 04:52:00.318669 containerd[1571]: 
time="2025-09-16T04:52:00.318622513Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:00.321194 containerd[1571]: time="2025-09-16T04:52:00.321140619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:00.321897 containerd[1571]: time="2025-09-16T04:52:00.321855303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.496553283s" Sep 16 04:52:00.321897 containerd[1571]: time="2025-09-16T04:52:00.321892233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 16 04:52:00.323790 containerd[1571]: time="2025-09-16T04:52:00.323735750Z" level=info msg="CreateContainer within sandbox \"c288aa068b727afa355abbc8c2ac4bb29f3e69a897266c4f059760ac9c098cee\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 16 04:52:00.335758 containerd[1571]: time="2025-09-16T04:52:00.335694292Z" level=info msg="Container 39c1ae363f915cae5e26fa25bf01edd5fc04be560538e364ebfeff9de1539bf5: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:52:00.346360 containerd[1571]: time="2025-09-16T04:52:00.346277777Z" level=info msg="CreateContainer within sandbox \"c288aa068b727afa355abbc8c2ac4bb29f3e69a897266c4f059760ac9c098cee\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"39c1ae363f915cae5e26fa25bf01edd5fc04be560538e364ebfeff9de1539bf5\"" Sep 16 
04:52:00.346948 containerd[1571]: time="2025-09-16T04:52:00.346909926Z" level=info msg="StartContainer for \"39c1ae363f915cae5e26fa25bf01edd5fc04be560538e364ebfeff9de1539bf5\"" Sep 16 04:52:00.348750 containerd[1571]: time="2025-09-16T04:52:00.348717325Z" level=info msg="connecting to shim 39c1ae363f915cae5e26fa25bf01edd5fc04be560538e364ebfeff9de1539bf5" address="unix:///run/containerd/s/3df5ec3487163600acf4ecaa6734aba73890bea72e5ffe0e4b7fa0a1f439780a" protocol=ttrpc version=3 Sep 16 04:52:00.386741 systemd[1]: Started cri-containerd-39c1ae363f915cae5e26fa25bf01edd5fc04be560538e364ebfeff9de1539bf5.scope - libcontainer container 39c1ae363f915cae5e26fa25bf01edd5fc04be560538e364ebfeff9de1539bf5. Sep 16 04:52:00.437982 containerd[1571]: time="2025-09-16T04:52:00.437903919Z" level=info msg="StartContainer for \"39c1ae363f915cae5e26fa25bf01edd5fc04be560538e364ebfeff9de1539bf5\" returns successfully" Sep 16 04:52:01.639088 containerd[1571]: time="2025-09-16T04:52:01.639001449Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:52:01.642794 systemd[1]: cri-containerd-39c1ae363f915cae5e26fa25bf01edd5fc04be560538e364ebfeff9de1539bf5.scope: Deactivated successfully. Sep 16 04:52:01.643314 systemd[1]: cri-containerd-39c1ae363f915cae5e26fa25bf01edd5fc04be560538e364ebfeff9de1539bf5.scope: Consumed 690ms CPU time, 177.4M memory peak, 2.2M read from disk, 171.3M written to disk. 
Sep 16 04:52:01.643959 containerd[1571]: time="2025-09-16T04:52:01.643913165Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39c1ae363f915cae5e26fa25bf01edd5fc04be560538e364ebfeff9de1539bf5\" id:\"39c1ae363f915cae5e26fa25bf01edd5fc04be560538e364ebfeff9de1539bf5\" pid:3466 exited_at:{seconds:1757998321 nanos:643475152}" Sep 16 04:52:01.643959 containerd[1571]: time="2025-09-16T04:52:01.644120665Z" level=info msg="received exit event container_id:\"39c1ae363f915cae5e26fa25bf01edd5fc04be560538e364ebfeff9de1539bf5\" id:\"39c1ae363f915cae5e26fa25bf01edd5fc04be560538e364ebfeff9de1539bf5\" pid:3466 exited_at:{seconds:1757998321 nanos:643475152}" Sep 16 04:52:01.664260 kubelet[2707]: I0916 04:52:01.664211 2707 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 16 04:52:01.681906 systemd[1]: Created slice kubepods-besteffort-podd6c9282e_1ae2_4573_883a_f016a02e49ed.slice - libcontainer container kubepods-besteffort-podd6c9282e_1ae2_4573_883a_f016a02e49ed.slice. Sep 16 04:52:01.685793 containerd[1571]: time="2025-09-16T04:52:01.685517500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mxpq2,Uid:d6c9282e-1ae2-4573-883a-f016a02e49ed,Namespace:calico-system,Attempt:0,}" Sep 16 04:52:01.690008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39c1ae363f915cae5e26fa25bf01edd5fc04be560538e364ebfeff9de1539bf5-rootfs.mount: Deactivated successfully. Sep 16 04:52:01.727378 systemd[1]: Created slice kubepods-burstable-podd3b400d3_6416_42aa_a67c_7ccb47c6ab2d.slice - libcontainer container kubepods-burstable-podd3b400d3_6416_42aa_a67c_7ccb47c6ab2d.slice. Sep 16 04:52:01.738980 systemd[1]: Created slice kubepods-burstable-pod7b6542b1_a4a2_4245_8a25_ea1a658acd1e.slice - libcontainer container kubepods-burstable-pod7b6542b1_a4a2_4245_8a25_ea1a658acd1e.slice. 
Sep 16 04:52:01.745597 systemd[1]: Created slice kubepods-besteffort-pod7fe8fd24_56e8_47c4_a114_00f71d9be5d1.slice - libcontainer container kubepods-besteffort-pod7fe8fd24_56e8_47c4_a114_00f71d9be5d1.slice. Sep 16 04:52:01.752188 systemd[1]: Created slice kubepods-besteffort-pode82133b7_4f24_4272_9be3_060161a5d703.slice - libcontainer container kubepods-besteffort-pode82133b7_4f24_4272_9be3_060161a5d703.slice. Sep 16 04:52:01.759119 systemd[1]: Created slice kubepods-besteffort-pod8bfda952_8fad_4baa_b0d7_f4854a0c9878.slice - libcontainer container kubepods-besteffort-pod8bfda952_8fad_4baa_b0d7_f4854a0c9878.slice. Sep 16 04:52:01.764445 systemd[1]: Created slice kubepods-besteffort-pod9b6de2c5_9291_4ad7_9014_054fa78d6512.slice - libcontainer container kubepods-besteffort-pod9b6de2c5_9291_4ad7_9014_054fa78d6512.slice. Sep 16 04:52:01.776410 systemd[1]: Created slice kubepods-besteffort-pod10eb362e_b387_4f8c_abd8_be12ccd4980c.slice - libcontainer container kubepods-besteffort-pod10eb362e_b387_4f8c_abd8_be12ccd4980c.slice. 
Sep 16 04:52:01.784580 kubelet[2707]: I0916 04:52:01.784517 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jkvw\" (UniqueName: \"kubernetes.io/projected/e82133b7-4f24-4272-9be3-060161a5d703-kube-api-access-5jkvw\") pod \"calico-apiserver-847b79dcd8-bz5jp\" (UID: \"e82133b7-4f24-4272-9be3-060161a5d703\") " pod="calico-apiserver/calico-apiserver-847b79dcd8-bz5jp" Sep 16 04:52:01.784580 kubelet[2707]: I0916 04:52:01.784567 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjz7p\" (UniqueName: \"kubernetes.io/projected/7b6542b1-a4a2-4245-8a25-ea1a658acd1e-kube-api-access-vjz7p\") pod \"coredns-7c65d6cfc9-sq727\" (UID: \"7b6542b1-a4a2-4245-8a25-ea1a658acd1e\") " pod="kube-system/coredns-7c65d6cfc9-sq727" Sep 16 04:52:01.784580 kubelet[2707]: I0916 04:52:01.784587 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9969z\" (UniqueName: \"kubernetes.io/projected/8bfda952-8fad-4baa-b0d7-f4854a0c9878-kube-api-access-9969z\") pod \"calico-apiserver-847b79dcd8-sdl2m\" (UID: \"8bfda952-8fad-4baa-b0d7-f4854a0c9878\") " pod="calico-apiserver/calico-apiserver-847b79dcd8-sdl2m" Sep 16 04:52:01.784851 kubelet[2707]: I0916 04:52:01.784604 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b6de2c5-9291-4ad7-9014-054fa78d6512-goldmane-ca-bundle\") pod \"goldmane-7988f88666-ldwjf\" (UID: \"9b6de2c5-9291-4ad7-9014-054fa78d6512\") " pod="calico-system/goldmane-7988f88666-ldwjf" Sep 16 04:52:01.784851 kubelet[2707]: I0916 04:52:01.784621 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9b6de2c5-9291-4ad7-9014-054fa78d6512-goldmane-key-pair\") pod 
\"goldmane-7988f88666-ldwjf\" (UID: \"9b6de2c5-9291-4ad7-9014-054fa78d6512\") " pod="calico-system/goldmane-7988f88666-ldwjf" Sep 16 04:52:01.784851 kubelet[2707]: I0916 04:52:01.784636 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm7kn\" (UniqueName: \"kubernetes.io/projected/9b6de2c5-9291-4ad7-9014-054fa78d6512-kube-api-access-hm7kn\") pod \"goldmane-7988f88666-ldwjf\" (UID: \"9b6de2c5-9291-4ad7-9014-054fa78d6512\") " pod="calico-system/goldmane-7988f88666-ldwjf" Sep 16 04:52:01.784851 kubelet[2707]: I0916 04:52:01.784653 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drp7x\" (UniqueName: \"kubernetes.io/projected/10eb362e-b387-4f8c-abd8-be12ccd4980c-kube-api-access-drp7x\") pod \"whisker-6697d5b558-4mw6w\" (UID: \"10eb362e-b387-4f8c-abd8-be12ccd4980c\") " pod="calico-system/whisker-6697d5b558-4mw6w" Sep 16 04:52:01.784851 kubelet[2707]: I0916 04:52:01.784667 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8bfda952-8fad-4baa-b0d7-f4854a0c9878-calico-apiserver-certs\") pod \"calico-apiserver-847b79dcd8-sdl2m\" (UID: \"8bfda952-8fad-4baa-b0d7-f4854a0c9878\") " pod="calico-apiserver/calico-apiserver-847b79dcd8-sdl2m" Sep 16 04:52:01.784979 kubelet[2707]: I0916 04:52:01.784682 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3b400d3-6416-42aa-a67c-7ccb47c6ab2d-config-volume\") pod \"coredns-7c65d6cfc9-wntp7\" (UID: \"d3b400d3-6416-42aa-a67c-7ccb47c6ab2d\") " pod="kube-system/coredns-7c65d6cfc9-wntp7" Sep 16 04:52:01.784979 kubelet[2707]: I0916 04:52:01.784697 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bzp5\" (UniqueName: 
\"kubernetes.io/projected/d3b400d3-6416-42aa-a67c-7ccb47c6ab2d-kube-api-access-6bzp5\") pod \"coredns-7c65d6cfc9-wntp7\" (UID: \"d3b400d3-6416-42aa-a67c-7ccb47c6ab2d\") " pod="kube-system/coredns-7c65d6cfc9-wntp7" Sep 16 04:52:01.784979 kubelet[2707]: I0916 04:52:01.784713 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpxpt\" (UniqueName: \"kubernetes.io/projected/7fe8fd24-56e8-47c4-a114-00f71d9be5d1-kube-api-access-qpxpt\") pod \"calico-kube-controllers-5bb9568b7c-pjwqp\" (UID: \"7fe8fd24-56e8-47c4-a114-00f71d9be5d1\") " pod="calico-system/calico-kube-controllers-5bb9568b7c-pjwqp" Sep 16 04:52:01.784979 kubelet[2707]: I0916 04:52:01.784736 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10eb362e-b387-4f8c-abd8-be12ccd4980c-whisker-backend-key-pair\") pod \"whisker-6697d5b558-4mw6w\" (UID: \"10eb362e-b387-4f8c-abd8-be12ccd4980c\") " pod="calico-system/whisker-6697d5b558-4mw6w" Sep 16 04:52:01.784979 kubelet[2707]: I0916 04:52:01.784751 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e82133b7-4f24-4272-9be3-060161a5d703-calico-apiserver-certs\") pod \"calico-apiserver-847b79dcd8-bz5jp\" (UID: \"e82133b7-4f24-4272-9be3-060161a5d703\") " pod="calico-apiserver/calico-apiserver-847b79dcd8-bz5jp" Sep 16 04:52:01.785097 kubelet[2707]: I0916 04:52:01.784766 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6de2c5-9291-4ad7-9014-054fa78d6512-config\") pod \"goldmane-7988f88666-ldwjf\" (UID: \"9b6de2c5-9291-4ad7-9014-054fa78d6512\") " pod="calico-system/goldmane-7988f88666-ldwjf" Sep 16 04:52:01.785097 kubelet[2707]: I0916 04:52:01.784781 2707 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fe8fd24-56e8-47c4-a114-00f71d9be5d1-tigera-ca-bundle\") pod \"calico-kube-controllers-5bb9568b7c-pjwqp\" (UID: \"7fe8fd24-56e8-47c4-a114-00f71d9be5d1\") " pod="calico-system/calico-kube-controllers-5bb9568b7c-pjwqp" Sep 16 04:52:01.785097 kubelet[2707]: I0916 04:52:01.784797 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b6542b1-a4a2-4245-8a25-ea1a658acd1e-config-volume\") pod \"coredns-7c65d6cfc9-sq727\" (UID: \"7b6542b1-a4a2-4245-8a25-ea1a658acd1e\") " pod="kube-system/coredns-7c65d6cfc9-sq727" Sep 16 04:52:01.785097 kubelet[2707]: I0916 04:52:01.784816 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10eb362e-b387-4f8c-abd8-be12ccd4980c-whisker-ca-bundle\") pod \"whisker-6697d5b558-4mw6w\" (UID: \"10eb362e-b387-4f8c-abd8-be12ccd4980c\") " pod="calico-system/whisker-6697d5b558-4mw6w" Sep 16 04:52:02.036622 kubelet[2707]: E0916 04:52:02.036591 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:52:02.037387 containerd[1571]: time="2025-09-16T04:52:02.037328432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wntp7,Uid:d3b400d3-6416-42aa-a67c-7ccb47c6ab2d,Namespace:kube-system,Attempt:0,}" Sep 16 04:52:02.042234 kubelet[2707]: E0916 04:52:02.042192 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:52:02.042724 containerd[1571]: time="2025-09-16T04:52:02.042691795Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-sq727,Uid:7b6542b1-a4a2-4245-8a25-ea1a658acd1e,Namespace:kube-system,Attempt:0,}" Sep 16 04:52:02.050089 containerd[1571]: time="2025-09-16T04:52:02.050025003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bb9568b7c-pjwqp,Uid:7fe8fd24-56e8-47c4-a114-00f71d9be5d1,Namespace:calico-system,Attempt:0,}" Sep 16 04:52:02.056773 containerd[1571]: time="2025-09-16T04:52:02.056709190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847b79dcd8-bz5jp,Uid:e82133b7-4f24-4272-9be3-060161a5d703,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:52:02.065034 containerd[1571]: time="2025-09-16T04:52:02.064974078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847b79dcd8-sdl2m,Uid:8bfda952-8fad-4baa-b0d7-f4854a0c9878,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:52:02.071266 containerd[1571]: time="2025-09-16T04:52:02.071072956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-ldwjf,Uid:9b6de2c5-9291-4ad7-9014-054fa78d6512,Namespace:calico-system,Attempt:0,}" Sep 16 04:52:02.085224 containerd[1571]: time="2025-09-16T04:52:02.085140324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6697d5b558-4mw6w,Uid:10eb362e-b387-4f8c-abd8-be12ccd4980c,Namespace:calico-system,Attempt:0,}" Sep 16 04:52:02.186942 containerd[1571]: time="2025-09-16T04:52:02.186866270Z" level=error msg="Failed to destroy network for sandbox \"2e313994480a923c66bce56c4910edffdda3557d8d28c618d546e96538d6c414\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.187541 containerd[1571]: time="2025-09-16T04:52:02.186865248Z" level=error msg="Failed to destroy network for sandbox \"32275ec6b87edff5fb5a3633aae4385a5cb0f990ed7b5ff9b1076995da617363\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.193045 containerd[1571]: time="2025-09-16T04:52:02.192968913Z" level=error msg="Failed to destroy network for sandbox \"90d07595681470728cbf977f18558af0aa4cae9c18f194aeb2f5ac0bf8b8b2b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.197748 containerd[1571]: time="2025-09-16T04:52:02.197700038Z" level=error msg="Failed to destroy network for sandbox \"a6672259023d51374eb2db2de68dbd4263f89905d37e70b80ff8001441d16dc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.207616 containerd[1571]: time="2025-09-16T04:52:02.207548644Z" level=error msg="Failed to destroy network for sandbox \"269bdaa60b23966060f6c71a6daf508346d286dc30036437c8219d107b5c197c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.213871 containerd[1571]: time="2025-09-16T04:52:02.213814846Z" level=error msg="Failed to destroy network for sandbox \"d2b17d967344a81cc405d57d8ccbdb41b7b7825f49f0b2607bbcab03d3df7efc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.290377 containerd[1571]: time="2025-09-16T04:52:02.290187485Z" level=error msg="Failed to destroy network for sandbox \"395ac5e0820cabd0844250100f08ac2bf6b921cf6fad834db6105a4f39e86720\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.470101 containerd[1571]: time="2025-09-16T04:52:02.470003136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mxpq2,Uid:d6c9282e-1ae2-4573-883a-f016a02e49ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32275ec6b87edff5fb5a3633aae4385a5cb0f990ed7b5ff9b1076995da617363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.477942 kubelet[2707]: E0916 04:52:02.477870 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32275ec6b87edff5fb5a3633aae4385a5cb0f990ed7b5ff9b1076995da617363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.478125 kubelet[2707]: E0916 04:52:02.477974 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32275ec6b87edff5fb5a3633aae4385a5cb0f990ed7b5ff9b1076995da617363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mxpq2" Sep 16 04:52:02.478125 kubelet[2707]: E0916 04:52:02.478003 2707 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32275ec6b87edff5fb5a3633aae4385a5cb0f990ed7b5ff9b1076995da617363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mxpq2" Sep 16 04:52:02.478125 kubelet[2707]: E0916 04:52:02.478067 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mxpq2_calico-system(d6c9282e-1ae2-4573-883a-f016a02e49ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mxpq2_calico-system(d6c9282e-1ae2-4573-883a-f016a02e49ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32275ec6b87edff5fb5a3633aae4385a5cb0f990ed7b5ff9b1076995da617363\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mxpq2" podUID="d6c9282e-1ae2-4573-883a-f016a02e49ed" Sep 16 04:52:02.484496 containerd[1571]: time="2025-09-16T04:52:02.484394073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847b79dcd8-bz5jp,Uid:e82133b7-4f24-4272-9be3-060161a5d703,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e313994480a923c66bce56c4910edffdda3557d8d28c618d546e96538d6c414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.486134 kubelet[2707]: E0916 04:52:02.484762 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e313994480a923c66bce56c4910edffdda3557d8d28c618d546e96538d6c414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.486134 kubelet[2707]: 
E0916 04:52:02.484824 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e313994480a923c66bce56c4910edffdda3557d8d28c618d546e96538d6c414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847b79dcd8-bz5jp" Sep 16 04:52:02.486134 kubelet[2707]: E0916 04:52:02.484845 2707 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e313994480a923c66bce56c4910edffdda3557d8d28c618d546e96538d6c414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847b79dcd8-bz5jp" Sep 16 04:52:02.486426 kubelet[2707]: E0916 04:52:02.484898 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-847b79dcd8-bz5jp_calico-apiserver(e82133b7-4f24-4272-9be3-060161a5d703)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-847b79dcd8-bz5jp_calico-apiserver(e82133b7-4f24-4272-9be3-060161a5d703)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e313994480a923c66bce56c4910edffdda3557d8d28c618d546e96538d6c414\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-847b79dcd8-bz5jp" podUID="e82133b7-4f24-4272-9be3-060161a5d703" Sep 16 04:52:02.487355 containerd[1571]: time="2025-09-16T04:52:02.487211790Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-wntp7,Uid:d3b400d3-6416-42aa-a67c-7ccb47c6ab2d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90d07595681470728cbf977f18558af0aa4cae9c18f194aeb2f5ac0bf8b8b2b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.488021 kubelet[2707]: E0916 04:52:02.487963 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90d07595681470728cbf977f18558af0aa4cae9c18f194aeb2f5ac0bf8b8b2b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.488413 kubelet[2707]: E0916 04:52:02.488027 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90d07595681470728cbf977f18558af0aa4cae9c18f194aeb2f5ac0bf8b8b2b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wntp7" Sep 16 04:52:02.488413 kubelet[2707]: E0916 04:52:02.488050 2707 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90d07595681470728cbf977f18558af0aa4cae9c18f194aeb2f5ac0bf8b8b2b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wntp7" Sep 16 04:52:02.488413 kubelet[2707]: E0916 04:52:02.488101 2707 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wntp7_kube-system(d3b400d3-6416-42aa-a67c-7ccb47c6ab2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wntp7_kube-system(d3b400d3-6416-42aa-a67c-7ccb47c6ab2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90d07595681470728cbf977f18558af0aa4cae9c18f194aeb2f5ac0bf8b8b2b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wntp7" podUID="d3b400d3-6416-42aa-a67c-7ccb47c6ab2d" Sep 16 04:52:02.490076 containerd[1571]: time="2025-09-16T04:52:02.489661476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847b79dcd8-sdl2m,Uid:8bfda952-8fad-4baa-b0d7-f4854a0c9878,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6672259023d51374eb2db2de68dbd4263f89905d37e70b80ff8001441d16dc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.490949 kubelet[2707]: E0916 04:52:02.490880 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6672259023d51374eb2db2de68dbd4263f89905d37e70b80ff8001441d16dc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.491022 kubelet[2707]: E0916 04:52:02.490982 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6672259023d51374eb2db2de68dbd4263f89905d37e70b80ff8001441d16dc4\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847b79dcd8-sdl2m" Sep 16 04:52:02.491022 kubelet[2707]: E0916 04:52:02.491011 2707 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6672259023d51374eb2db2de68dbd4263f89905d37e70b80ff8001441d16dc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847b79dcd8-sdl2m" Sep 16 04:52:02.491099 kubelet[2707]: E0916 04:52:02.491067 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-847b79dcd8-sdl2m_calico-apiserver(8bfda952-8fad-4baa-b0d7-f4854a0c9878)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-847b79dcd8-sdl2m_calico-apiserver(8bfda952-8fad-4baa-b0d7-f4854a0c9878)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6672259023d51374eb2db2de68dbd4263f89905d37e70b80ff8001441d16dc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-847b79dcd8-sdl2m" podUID="8bfda952-8fad-4baa-b0d7-f4854a0c9878" Sep 16 04:52:02.491483 containerd[1571]: time="2025-09-16T04:52:02.491389304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bb9568b7c-pjwqp,Uid:7fe8fd24-56e8-47c4-a114-00f71d9be5d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"269bdaa60b23966060f6c71a6daf508346d286dc30036437c8219d107b5c197c\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.491740 kubelet[2707]: E0916 04:52:02.491708 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"269bdaa60b23966060f6c71a6daf508346d286dc30036437c8219d107b5c197c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.491740 kubelet[2707]: E0916 04:52:02.491750 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"269bdaa60b23966060f6c71a6daf508346d286dc30036437c8219d107b5c197c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bb9568b7c-pjwqp" Sep 16 04:52:02.491740 kubelet[2707]: E0916 04:52:02.491771 2707 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"269bdaa60b23966060f6c71a6daf508346d286dc30036437c8219d107b5c197c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bb9568b7c-pjwqp" Sep 16 04:52:02.491966 kubelet[2707]: E0916 04:52:02.491808 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bb9568b7c-pjwqp_calico-system(7fe8fd24-56e8-47c4-a114-00f71d9be5d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-5bb9568b7c-pjwqp_calico-system(7fe8fd24-56e8-47c4-a114-00f71d9be5d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"269bdaa60b23966060f6c71a6daf508346d286dc30036437c8219d107b5c197c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bb9568b7c-pjwqp" podUID="7fe8fd24-56e8-47c4-a114-00f71d9be5d1" Sep 16 04:52:02.493041 containerd[1571]: time="2025-09-16T04:52:02.492982780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sq727,Uid:7b6542b1-a4a2-4245-8a25-ea1a658acd1e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2b17d967344a81cc405d57d8ccbdb41b7b7825f49f0b2607bbcab03d3df7efc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.493205 kubelet[2707]: E0916 04:52:02.493168 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2b17d967344a81cc405d57d8ccbdb41b7b7825f49f0b2607bbcab03d3df7efc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.493205 kubelet[2707]: E0916 04:52:02.493218 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2b17d967344a81cc405d57d8ccbdb41b7b7825f49f0b2607bbcab03d3df7efc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-sq727" Sep 16 04:52:02.493376 kubelet[2707]: E0916 04:52:02.493239 2707 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2b17d967344a81cc405d57d8ccbdb41b7b7825f49f0b2607bbcab03d3df7efc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-sq727" Sep 16 04:52:02.493376 kubelet[2707]: E0916 04:52:02.493283 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-sq727_kube-system(7b6542b1-a4a2-4245-8a25-ea1a658acd1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-sq727_kube-system(7b6542b1-a4a2-4245-8a25-ea1a658acd1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2b17d967344a81cc405d57d8ccbdb41b7b7825f49f0b2607bbcab03d3df7efc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-sq727" podUID="7b6542b1-a4a2-4245-8a25-ea1a658acd1e" Sep 16 04:52:02.494382 containerd[1571]: time="2025-09-16T04:52:02.494308522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-ldwjf,Uid:9b6de2c5-9291-4ad7-9014-054fa78d6512,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"395ac5e0820cabd0844250100f08ac2bf6b921cf6fad834db6105a4f39e86720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.494716 kubelet[2707]: E0916 04:52:02.494602 2707 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"395ac5e0820cabd0844250100f08ac2bf6b921cf6fad834db6105a4f39e86720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.494716 kubelet[2707]: E0916 04:52:02.494681 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"395ac5e0820cabd0844250100f08ac2bf6b921cf6fad834db6105a4f39e86720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-ldwjf" Sep 16 04:52:02.494829 kubelet[2707]: E0916 04:52:02.494713 2707 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"395ac5e0820cabd0844250100f08ac2bf6b921cf6fad834db6105a4f39e86720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-ldwjf" Sep 16 04:52:02.494865 kubelet[2707]: E0916 04:52:02.494814 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-ldwjf_calico-system(9b6de2c5-9291-4ad7-9014-054fa78d6512)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-ldwjf_calico-system(9b6de2c5-9291-4ad7-9014-054fa78d6512)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"395ac5e0820cabd0844250100f08ac2bf6b921cf6fad834db6105a4f39e86720\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-ldwjf" podUID="9b6de2c5-9291-4ad7-9014-054fa78d6512" Sep 16 04:52:02.515352 containerd[1571]: time="2025-09-16T04:52:02.515275403Z" level=error msg="Failed to destroy network for sandbox \"10d7ce3774bde2f506013966163f6fb8e238e7a44e56f24da5fb0b5d990cc758\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.517272 containerd[1571]: time="2025-09-16T04:52:02.517230388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6697d5b558-4mw6w,Uid:10eb362e-b387-4f8c-abd8-be12ccd4980c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10d7ce3774bde2f506013966163f6fb8e238e7a44e56f24da5fb0b5d990cc758\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.517651 kubelet[2707]: E0916 04:52:02.517582 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10d7ce3774bde2f506013966163f6fb8e238e7a44e56f24da5fb0b5d990cc758\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:02.517651 kubelet[2707]: E0916 04:52:02.517660 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10d7ce3774bde2f506013966163f6fb8e238e7a44e56f24da5fb0b5d990cc758\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/whisker-6697d5b558-4mw6w" Sep 16 04:52:02.517861 kubelet[2707]: E0916 04:52:02.517687 2707 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10d7ce3774bde2f506013966163f6fb8e238e7a44e56f24da5fb0b5d990cc758\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6697d5b558-4mw6w" Sep 16 04:52:02.517861 kubelet[2707]: E0916 04:52:02.517730 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6697d5b558-4mw6w_calico-system(10eb362e-b387-4f8c-abd8-be12ccd4980c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6697d5b558-4mw6w_calico-system(10eb362e-b387-4f8c-abd8-be12ccd4980c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10d7ce3774bde2f506013966163f6fb8e238e7a44e56f24da5fb0b5d990cc758\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6697d5b558-4mw6w" podUID="10eb362e-b387-4f8c-abd8-be12ccd4980c" Sep 16 04:52:02.693067 systemd[1]: run-netns-cni\x2d0e75f8a3\x2d94f3\x2ddcb4\x2dae56\x2d7eb02b4d5ffa.mount: Deactivated successfully. Sep 16 04:52:02.855775 containerd[1571]: time="2025-09-16T04:52:02.855691394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 16 04:52:12.892946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2941873034.mount: Deactivated successfully. 
Sep 16 04:52:13.486330 containerd[1571]: time="2025-09-16T04:52:13.486251625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:13.487242 containerd[1571]: time="2025-09-16T04:52:13.487207209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 16 04:52:13.488372 containerd[1571]: time="2025-09-16T04:52:13.488330999Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:13.496376 containerd[1571]: time="2025-09-16T04:52:13.496064008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:13.496376 containerd[1571]: time="2025-09-16T04:52:13.496348441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.640611963s" Sep 16 04:52:13.496514 containerd[1571]: time="2025-09-16T04:52:13.496493895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 16 04:52:13.508681 containerd[1571]: time="2025-09-16T04:52:13.508631623Z" level=info msg="CreateContainer within sandbox \"c288aa068b727afa355abbc8c2ac4bb29f3e69a897266c4f059760ac9c098cee\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 16 04:52:13.529551 containerd[1571]: time="2025-09-16T04:52:13.529476120Z" level=info msg="Container 
87bb5b137211b45436b07ef2d37455f7d4f1f5f8bb9ce550c94754c69563caa0: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:52:13.549825 containerd[1571]: time="2025-09-16T04:52:13.549763299Z" level=info msg="CreateContainer within sandbox \"c288aa068b727afa355abbc8c2ac4bb29f3e69a897266c4f059760ac9c098cee\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"87bb5b137211b45436b07ef2d37455f7d4f1f5f8bb9ce550c94754c69563caa0\"" Sep 16 04:52:13.551458 containerd[1571]: time="2025-09-16T04:52:13.550409381Z" level=info msg="StartContainer for \"87bb5b137211b45436b07ef2d37455f7d4f1f5f8bb9ce550c94754c69563caa0\"" Sep 16 04:52:13.551860 containerd[1571]: time="2025-09-16T04:52:13.551831712Z" level=info msg="connecting to shim 87bb5b137211b45436b07ef2d37455f7d4f1f5f8bb9ce550c94754c69563caa0" address="unix:///run/containerd/s/3df5ec3487163600acf4ecaa6734aba73890bea72e5ffe0e4b7fa0a1f439780a" protocol=ttrpc version=3 Sep 16 04:52:13.570588 systemd[1]: Started cri-containerd-87bb5b137211b45436b07ef2d37455f7d4f1f5f8bb9ce550c94754c69563caa0.scope - libcontainer container 87bb5b137211b45436b07ef2d37455f7d4f1f5f8bb9ce550c94754c69563caa0. 
Sep 16 04:52:13.619411 containerd[1571]: time="2025-09-16T04:52:13.619341423Z" level=info msg="StartContainer for \"87bb5b137211b45436b07ef2d37455f7d4f1f5f8bb9ce550c94754c69563caa0\" returns successfully" Sep 16 04:52:13.670988 kubelet[2707]: E0916 04:52:13.670762 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:52:13.672414 containerd[1571]: time="2025-09-16T04:52:13.672130926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sq727,Uid:7b6542b1-a4a2-4245-8a25-ea1a658acd1e,Namespace:kube-system,Attempt:0,}" Sep 16 04:52:13.675851 containerd[1571]: time="2025-09-16T04:52:13.675162757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847b79dcd8-sdl2m,Uid:8bfda952-8fad-4baa-b0d7-f4854a0c9878,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:52:13.719471 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 16 04:52:13.719602 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 16 04:52:13.784959 containerd[1571]: time="2025-09-16T04:52:13.784804401Z" level=error msg="Failed to destroy network for sandbox \"cdf9e8c17c79fed66236f9e2d0d530e1976ea3347ef7b77f5bd23f155e1050c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:13.789822 containerd[1571]: time="2025-09-16T04:52:13.789758232Z" level=error msg="Failed to destroy network for sandbox \"1cbaaf147f447517408c0c26be56300b54759157736dd948df0190ec7b869bb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:14.001101 containerd[1571]: time="2025-09-16T04:52:14.000690470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sq727,Uid:7b6542b1-a4a2-4245-8a25-ea1a658acd1e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdf9e8c17c79fed66236f9e2d0d530e1976ea3347ef7b77f5bd23f155e1050c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:14.001585 kubelet[2707]: E0916 04:52:14.001380 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdf9e8c17c79fed66236f9e2d0d530e1976ea3347ef7b77f5bd23f155e1050c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:14.001915 kubelet[2707]: E0916 04:52:14.001667 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"cdf9e8c17c79fed66236f9e2d0d530e1976ea3347ef7b77f5bd23f155e1050c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-sq727" Sep 16 04:52:14.002090 kubelet[2707]: E0916 04:52:14.001994 2707 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdf9e8c17c79fed66236f9e2d0d530e1976ea3347ef7b77f5bd23f155e1050c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-sq727" Sep 16 04:52:14.003099 containerd[1571]: time="2025-09-16T04:52:14.002990067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847b79dcd8-sdl2m,Uid:8bfda952-8fad-4baa-b0d7-f4854a0c9878,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cbaaf147f447517408c0c26be56300b54759157736dd948df0190ec7b869bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:14.003543 kubelet[2707]: E0916 04:52:14.003348 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cbaaf147f447517408c0c26be56300b54759157736dd948df0190ec7b869bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:52:14.003543 kubelet[2707]: E0916 04:52:14.003400 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"1cbaaf147f447517408c0c26be56300b54759157736dd948df0190ec7b869bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847b79dcd8-sdl2m" Sep 16 04:52:14.003543 kubelet[2707]: E0916 04:52:14.003447 2707 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cbaaf147f447517408c0c26be56300b54759157736dd948df0190ec7b869bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847b79dcd8-sdl2m" Sep 16 04:52:14.003678 kubelet[2707]: E0916 04:52:14.003494 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-847b79dcd8-sdl2m_calico-apiserver(8bfda952-8fad-4baa-b0d7-f4854a0c9878)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-847b79dcd8-sdl2m_calico-apiserver(8bfda952-8fad-4baa-b0d7-f4854a0c9878)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cbaaf147f447517408c0c26be56300b54759157736dd948df0190ec7b869bb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-847b79dcd8-sdl2m" podUID="8bfda952-8fad-4baa-b0d7-f4854a0c9878" Sep 16 04:52:14.004881 kubelet[2707]: E0916 04:52:14.004837 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-sq727_kube-system(7b6542b1-a4a2-4245-8a25-ea1a658acd1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7c65d6cfc9-sq727_kube-system(7b6542b1-a4a2-4245-8a25-ea1a658acd1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdf9e8c17c79fed66236f9e2d0d530e1976ea3347ef7b77f5bd23f155e1050c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-sq727" podUID="7b6542b1-a4a2-4245-8a25-ea1a658acd1e"
Sep 16 04:52:14.043048 kubelet[2707]: I0916 04:52:14.042963 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-khhpb" podStartSLOduration=1.823733722 podStartE2EDuration="25.04294144s" podCreationTimestamp="2025-09-16 04:51:49 +0000 UTC" firstStartedPulling="2025-09-16 04:51:50.27819329 +0000 UTC m=+18.811468656" lastFinishedPulling="2025-09-16 04:52:13.497401018 +0000 UTC m=+42.030676374" observedRunningTime="2025-09-16 04:52:14.030960377 +0000 UTC m=+42.564235763" watchObservedRunningTime="2025-09-16 04:52:14.04294144 +0000 UTC m=+42.576216797"
Sep 16 04:52:14.070467 kubelet[2707]: I0916 04:52:14.070394 2707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10eb362e-b387-4f8c-abd8-be12ccd4980c-whisker-ca-bundle\") pod \"10eb362e-b387-4f8c-abd8-be12ccd4980c\" (UID: \"10eb362e-b387-4f8c-abd8-be12ccd4980c\") "
Sep 16 04:52:14.070641 kubelet[2707]: I0916 04:52:14.070530 2707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10eb362e-b387-4f8c-abd8-be12ccd4980c-whisker-backend-key-pair\") pod \"10eb362e-b387-4f8c-abd8-be12ccd4980c\" (UID: \"10eb362e-b387-4f8c-abd8-be12ccd4980c\") "
Sep 16 04:52:14.070641 kubelet[2707]: I0916 04:52:14.070567 2707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drp7x\" (UniqueName: \"kubernetes.io/projected/10eb362e-b387-4f8c-abd8-be12ccd4980c-kube-api-access-drp7x\") pod \"10eb362e-b387-4f8c-abd8-be12ccd4980c\" (UID: \"10eb362e-b387-4f8c-abd8-be12ccd4980c\") "
Sep 16 04:52:14.073453 kubelet[2707]: I0916 04:52:14.073374 2707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10eb362e-b387-4f8c-abd8-be12ccd4980c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "10eb362e-b387-4f8c-abd8-be12ccd4980c" (UID: "10eb362e-b387-4f8c-abd8-be12ccd4980c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 16 04:52:14.084371 systemd[1]: var-lib-kubelet-pods-10eb362e\x2db387\x2d4f8c\x2dabd8\x2dbe12ccd4980c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddrp7x.mount: Deactivated successfully.
Sep 16 04:52:14.085246 systemd[1]: var-lib-kubelet-pods-10eb362e\x2db387\x2d4f8c\x2dabd8\x2dbe12ccd4980c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Sep 16 04:52:14.089339 kubelet[2707]: I0916 04:52:14.086561 2707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10eb362e-b387-4f8c-abd8-be12ccd4980c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "10eb362e-b387-4f8c-abd8-be12ccd4980c" (UID: "10eb362e-b387-4f8c-abd8-be12ccd4980c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 16 04:52:14.089339 kubelet[2707]: I0916 04:52:14.089298 2707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10eb362e-b387-4f8c-abd8-be12ccd4980c-kube-api-access-drp7x" (OuterVolumeSpecName: "kube-api-access-drp7x") pod "10eb362e-b387-4f8c-abd8-be12ccd4980c" (UID: "10eb362e-b387-4f8c-abd8-be12ccd4980c"). InnerVolumeSpecName "kube-api-access-drp7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 16 04:52:14.172952 kubelet[2707]: I0916 04:52:14.172073 2707 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10eb362e-b387-4f8c-abd8-be12ccd4980c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Sep 16 04:52:14.173660 kubelet[2707]: I0916 04:52:14.173637 2707 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10eb362e-b387-4f8c-abd8-be12ccd4980c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Sep 16 04:52:14.173660 kubelet[2707]: I0916 04:52:14.173659 2707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drp7x\" (UniqueName: \"kubernetes.io/projected/10eb362e-b387-4f8c-abd8-be12ccd4980c-kube-api-access-drp7x\") on node \"localhost\" DevicePath \"\""
Sep 16 04:52:14.278422 containerd[1571]: time="2025-09-16T04:52:14.278365144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bb5b137211b45436b07ef2d37455f7d4f1f5f8bb9ce550c94754c69563caa0\" id:\"a468b3a3e492617edf2b7f29f639535ec84a397e0a2bb7fc4a872f34e07f9ab7\" pid:3913 exit_status:1 exited_at:{seconds:1757998334 nanos:278042108}"
Sep 16 04:52:15.031935 systemd[1]: Removed slice kubepods-besteffort-pod10eb362e_b387_4f8c_abd8_be12ccd4980c.slice - libcontainer container kubepods-besteffort-pod10eb362e_b387_4f8c_abd8_be12ccd4980c.slice.
Sep 16 04:52:15.094271 containerd[1571]: time="2025-09-16T04:52:15.094196226Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bb5b137211b45436b07ef2d37455f7d4f1f5f8bb9ce550c94754c69563caa0\" id:\"80303efaa56b89e862dde0507af7b9c9f547169cd3b8d1a306f3f9a613038691\" pid:3949 exit_status:1 exited_at:{seconds:1757998335 nanos:93854535}"
Sep 16 04:52:15.219788 systemd[1]: Created slice kubepods-besteffort-pod44b57730_af30_4ae3_91ab_c496b2ee37da.slice - libcontainer container kubepods-besteffort-pod44b57730_af30_4ae3_91ab_c496b2ee37da.slice.
Sep 16 04:52:15.282615 kubelet[2707]: I0916 04:52:15.282414 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44b57730-af30-4ae3-91ab-c496b2ee37da-whisker-ca-bundle\") pod \"whisker-7d85dfdfc7-ksrcl\" (UID: \"44b57730-af30-4ae3-91ab-c496b2ee37da\") " pod="calico-system/whisker-7d85dfdfc7-ksrcl"
Sep 16 04:52:15.282615 kubelet[2707]: I0916 04:52:15.282492 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/44b57730-af30-4ae3-91ab-c496b2ee37da-whisker-backend-key-pair\") pod \"whisker-7d85dfdfc7-ksrcl\" (UID: \"44b57730-af30-4ae3-91ab-c496b2ee37da\") " pod="calico-system/whisker-7d85dfdfc7-ksrcl"
Sep 16 04:52:15.282615 kubelet[2707]: I0916 04:52:15.282556 2707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzv8f\" (UniqueName: \"kubernetes.io/projected/44b57730-af30-4ae3-91ab-c496b2ee37da-kube-api-access-bzv8f\") pod \"whisker-7d85dfdfc7-ksrcl\" (UID: \"44b57730-af30-4ae3-91ab-c496b2ee37da\") " pod="calico-system/whisker-7d85dfdfc7-ksrcl"
Sep 16 04:52:15.525902 containerd[1571]: time="2025-09-16T04:52:15.525852004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d85dfdfc7-ksrcl,Uid:44b57730-af30-4ae3-91ab-c496b2ee37da,Namespace:calico-system,Attempt:0,}"
Sep 16 04:52:15.671267 kubelet[2707]: E0916 04:52:15.671130 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:52:15.672969 containerd[1571]: time="2025-09-16T04:52:15.672653481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847b79dcd8-bz5jp,Uid:e82133b7-4f24-4272-9be3-060161a5d703,Namespace:calico-apiserver,Attempt:0,}"
Sep 16 04:52:15.673394 containerd[1571]: time="2025-09-16T04:52:15.672693075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bb9568b7c-pjwqp,Uid:7fe8fd24-56e8-47c4-a114-00f71d9be5d1,Namespace:calico-system,Attempt:0,}"
Sep 16 04:52:15.673905 containerd[1571]: time="2025-09-16T04:52:15.673866878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wntp7,Uid:d3b400d3-6416-42aa-a67c-7ccb47c6ab2d,Namespace:kube-system,Attempt:0,}"
Sep 16 04:52:15.674798 systemd[1]: Started sshd@7-10.0.0.58:22-10.0.0.1:60394.service - OpenSSH per-connection server daemon (10.0.0.1:60394).
Sep 16 04:52:15.676279 kubelet[2707]: I0916 04:52:15.676226 2707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10eb362e-b387-4f8c-abd8-be12ccd4980c" path="/var/lib/kubelet/pods/10eb362e-b387-4f8c-abd8-be12ccd4980c/volumes"
Sep 16 04:52:15.760067 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 60394 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE
Sep 16 04:52:15.761838 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:52:15.768370 systemd-logind[1507]: New session 8 of user core.
Sep 16 04:52:15.778691 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 16 04:52:15.857676 systemd-networkd[1466]: cali997fbb2492d: Link UP
Sep 16 04:52:15.857914 systemd-networkd[1466]: cali997fbb2492d: Gained carrier
Sep 16 04:52:15.878609 containerd[1571]: 2025-09-16 04:52:15.610 [INFO][4065] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 16 04:52:15.878609 containerd[1571]: 2025-09-16 04:52:15.631 [INFO][4065] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7d85dfdfc7--ksrcl-eth0 whisker-7d85dfdfc7- calico-system 44b57730-af30-4ae3-91ab-c496b2ee37da 910 0 2025-09-16 04:52:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7d85dfdfc7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7d85dfdfc7-ksrcl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali997fbb2492d [] [] }} ContainerID="1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" Namespace="calico-system" Pod="whisker-7d85dfdfc7-ksrcl" WorkloadEndpoint="localhost-k8s-whisker--7d85dfdfc7--ksrcl-"
Sep 16 04:52:15.878609 containerd[1571]: 2025-09-16 04:52:15.631 [INFO][4065] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" Namespace="calico-system" Pod="whisker-7d85dfdfc7-ksrcl" WorkloadEndpoint="localhost-k8s-whisker--7d85dfdfc7--ksrcl-eth0"
Sep 16 04:52:15.878609 containerd[1571]: 2025-09-16 04:52:15.765 [INFO][4079] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" HandleID="k8s-pod-network.1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" Workload="localhost-k8s-whisker--7d85dfdfc7--ksrcl-eth0"
Sep 16 04:52:15.878881 containerd[1571]: 2025-09-16 04:52:15.766 [INFO][4079] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" HandleID="k8s-pod-network.1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" Workload="localhost-k8s-whisker--7d85dfdfc7--ksrcl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bec30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7d85dfdfc7-ksrcl", "timestamp":"2025-09-16 04:52:15.765145878 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 16 04:52:15.878881 containerd[1571]: 2025-09-16 04:52:15.766 [INFO][4079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 16 04:52:15.878881 containerd[1571]: 2025-09-16 04:52:15.766 [INFO][4079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 16 04:52:15.878881 containerd[1571]: 2025-09-16 04:52:15.766 [INFO][4079] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 16 04:52:15.878881 containerd[1571]: 2025-09-16 04:52:15.786 [INFO][4079] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" host="localhost"
Sep 16 04:52:15.878881 containerd[1571]: 2025-09-16 04:52:15.804 [INFO][4079] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 16 04:52:15.878881 containerd[1571]: 2025-09-16 04:52:15.818 [INFO][4079] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 16 04:52:15.878881 containerd[1571]: 2025-09-16 04:52:15.821 [INFO][4079] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 16 04:52:15.878881 containerd[1571]: 2025-09-16 04:52:15.825 [INFO][4079] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 16 04:52:15.878881 containerd[1571]: 2025-09-16 04:52:15.825 [INFO][4079] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" host="localhost"
Sep 16 04:52:15.879113 containerd[1571]: 2025-09-16 04:52:15.827 [INFO][4079] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66
Sep 16 04:52:15.879113 containerd[1571]: 2025-09-16 04:52:15.832 [INFO][4079] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" host="localhost"
Sep 16 04:52:15.879113 containerd[1571]: 2025-09-16 04:52:15.839 [INFO][4079] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" host="localhost"
Sep 16 04:52:15.879113 containerd[1571]: 2025-09-16 04:52:15.840 [INFO][4079] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" host="localhost"
Sep 16 04:52:15.879113 containerd[1571]: 2025-09-16 04:52:15.840 [INFO][4079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 16 04:52:15.879113 containerd[1571]: 2025-09-16 04:52:15.841 [INFO][4079] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" HandleID="k8s-pod-network.1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" Workload="localhost-k8s-whisker--7d85dfdfc7--ksrcl-eth0"
Sep 16 04:52:15.879255 containerd[1571]: 2025-09-16 04:52:15.845 [INFO][4065] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" Namespace="calico-system" Pod="whisker-7d85dfdfc7-ksrcl" WorkloadEndpoint="localhost-k8s-whisker--7d85dfdfc7--ksrcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7d85dfdfc7--ksrcl-eth0", GenerateName:"whisker-7d85dfdfc7-", Namespace:"calico-system", SelfLink:"", UID:"44b57730-af30-4ae3-91ab-c496b2ee37da", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 52, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d85dfdfc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7d85dfdfc7-ksrcl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali997fbb2492d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:52:15.879255 containerd[1571]: 2025-09-16 04:52:15.845 [INFO][4065] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" Namespace="calico-system" Pod="whisker-7d85dfdfc7-ksrcl" WorkloadEndpoint="localhost-k8s-whisker--7d85dfdfc7--ksrcl-eth0"
Sep 16 04:52:15.879335 containerd[1571]: 2025-09-16 04:52:15.845 [INFO][4065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali997fbb2492d ContainerID="1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" Namespace="calico-system" Pod="whisker-7d85dfdfc7-ksrcl" WorkloadEndpoint="localhost-k8s-whisker--7d85dfdfc7--ksrcl-eth0"
Sep 16 04:52:15.879335 containerd[1571]: 2025-09-16 04:52:15.857 [INFO][4065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" Namespace="calico-system" Pod="whisker-7d85dfdfc7-ksrcl" WorkloadEndpoint="localhost-k8s-whisker--7d85dfdfc7--ksrcl-eth0"
Sep 16 04:52:15.879377 containerd[1571]: 2025-09-16 04:52:15.858 [INFO][4065] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" Namespace="calico-system" Pod="whisker-7d85dfdfc7-ksrcl" WorkloadEndpoint="localhost-k8s-whisker--7d85dfdfc7--ksrcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7d85dfdfc7--ksrcl-eth0", GenerateName:"whisker-7d85dfdfc7-", Namespace:"calico-system", SelfLink:"", UID:"44b57730-af30-4ae3-91ab-c496b2ee37da", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 52, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d85dfdfc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66", Pod:"whisker-7d85dfdfc7-ksrcl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali997fbb2492d", MAC:"0e:c0:bc:69:6f:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:52:15.879464 containerd[1571]: 2025-09-16 04:52:15.875 [INFO][4065] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" Namespace="calico-system" Pod="whisker-7d85dfdfc7-ksrcl" WorkloadEndpoint="localhost-k8s-whisker--7d85dfdfc7--ksrcl-eth0"
Sep 16 04:52:15.941367 systemd-networkd[1466]: calibb65c55d0b9: Link UP
Sep 16 04:52:15.942649 systemd-networkd[1466]: calibb65c55d0b9: Gained carrier
Sep 16 04:52:15.955885 containerd[1571]: 2025-09-16 04:52:15.761 [INFO][4093] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 16 04:52:15.955885 containerd[1571]: 2025-09-16 04:52:15.779 [INFO][4093] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--847b79dcd8--bz5jp-eth0 calico-apiserver-847b79dcd8- calico-apiserver e82133b7-4f24-4272-9be3-060161a5d703 823 0 2025-09-16 04:51:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:847b79dcd8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-847b79dcd8-bz5jp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibb65c55d0b9 [] [] }} ContainerID="2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-bz5jp" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--bz5jp-"
Sep 16 04:52:15.955885 containerd[1571]: 2025-09-16 04:52:15.779 [INFO][4093] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-bz5jp" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--bz5jp-eth0"
Sep 16 04:52:15.955885 containerd[1571]: 2025-09-16 04:52:15.814 [INFO][4138] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" HandleID="k8s-pod-network.2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" Workload="localhost-k8s-calico--apiserver--847b79dcd8--bz5jp-eth0"
Sep 16 04:52:15.956458 containerd[1571]: 2025-09-16 04:52:15.814 [INFO][4138] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" HandleID="k8s-pod-network.2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" Workload="localhost-k8s-calico--apiserver--847b79dcd8--bz5jp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00021d840), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-847b79dcd8-bz5jp", "timestamp":"2025-09-16 04:52:15.814585587 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 16 04:52:15.956458 containerd[1571]: 2025-09-16 04:52:15.814 [INFO][4138] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 16 04:52:15.956458 containerd[1571]: 2025-09-16 04:52:15.841 [INFO][4138] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 16 04:52:15.956458 containerd[1571]: 2025-09-16 04:52:15.841 [INFO][4138] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 16 04:52:15.956458 containerd[1571]: 2025-09-16 04:52:15.886 [INFO][4138] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" host="localhost"
Sep 16 04:52:15.956458 containerd[1571]: 2025-09-16 04:52:15.902 [INFO][4138] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 16 04:52:15.956458 containerd[1571]: 2025-09-16 04:52:15.910 [INFO][4138] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 16 04:52:15.956458 containerd[1571]: 2025-09-16 04:52:15.912 [INFO][4138] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 16 04:52:15.956458 containerd[1571]: 2025-09-16 04:52:15.916 [INFO][4138] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 16 04:52:15.956458 containerd[1571]: 2025-09-16 04:52:15.916 [INFO][4138] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" host="localhost"
Sep 16 04:52:15.956690 containerd[1571]: 2025-09-16 04:52:15.920 [INFO][4138] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04
Sep 16 04:52:15.956690 containerd[1571]: 2025-09-16 04:52:15.924 [INFO][4138] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" host="localhost"
Sep 16 04:52:15.956690 containerd[1571]: 2025-09-16 04:52:15.931 [INFO][4138] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" host="localhost"
Sep 16 04:52:15.956690 containerd[1571]: 2025-09-16 04:52:15.931 [INFO][4138] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" host="localhost"
Sep 16 04:52:15.956690 containerd[1571]: 2025-09-16 04:52:15.931 [INFO][4138] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 16 04:52:15.956690 containerd[1571]: 2025-09-16 04:52:15.931 [INFO][4138] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" HandleID="k8s-pod-network.2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" Workload="localhost-k8s-calico--apiserver--847b79dcd8--bz5jp-eth0"
Sep 16 04:52:15.956815 containerd[1571]: 2025-09-16 04:52:15.935 [INFO][4093] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-bz5jp" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--bz5jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--847b79dcd8--bz5jp-eth0", GenerateName:"calico-apiserver-847b79dcd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e82133b7-4f24-4272-9be3-060161a5d703", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 51, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847b79dcd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-847b79dcd8-bz5jp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb65c55d0b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:52:15.956870 containerd[1571]: 2025-09-16 04:52:15.935 [INFO][4093] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-bz5jp" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--bz5jp-eth0"
Sep 16 04:52:15.956870 containerd[1571]: 2025-09-16 04:52:15.935 [INFO][4093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb65c55d0b9 ContainerID="2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-bz5jp" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--bz5jp-eth0"
Sep 16 04:52:15.956870 containerd[1571]: 2025-09-16 04:52:15.941 [INFO][4093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-bz5jp" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--bz5jp-eth0"
Sep 16 04:52:15.956932 containerd[1571]: 2025-09-16 04:52:15.942 [INFO][4093] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-bz5jp" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--bz5jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--847b79dcd8--bz5jp-eth0", GenerateName:"calico-apiserver-847b79dcd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e82133b7-4f24-4272-9be3-060161a5d703", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 51, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847b79dcd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04", Pod:"calico-apiserver-847b79dcd8-bz5jp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb65c55d0b9", MAC:"a2:fb:72:03:9c:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:52:15.956995 containerd[1571]: 2025-09-16 04:52:15.952 [INFO][4093] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-bz5jp" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--bz5jp-eth0"
Sep 16 04:52:15.970165 sshd[4136]: Connection closed by 10.0.0.1 port 60394
Sep 16 04:52:15.970541 sshd-session[4085]: pam_unix(sshd:session): session closed for user core
Sep 16 04:52:15.976099 systemd[1]: sshd@7-10.0.0.58:22-10.0.0.1:60394.service: Deactivated successfully.
Sep 16 04:52:15.978575 systemd[1]: session-8.scope: Deactivated successfully.
Sep 16 04:52:15.979699 systemd-logind[1507]: Session 8 logged out. Waiting for processes to exit.
Sep 16 04:52:15.981189 systemd-logind[1507]: Removed session 8.
Sep 16 04:52:16.049197 systemd-networkd[1466]: cali8347f8648c1: Link UP
Sep 16 04:52:16.049674 systemd-networkd[1466]: cali8347f8648c1: Gained carrier
Sep 16 04:52:16.068605 containerd[1571]: time="2025-09-16T04:52:16.068549331Z" level=info msg="connecting to shim 2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04" address="unix:///run/containerd/s/2dc16b6377f204312fd440b92e5f3522b904fcf1ef2d681ceb7e03de46d49fa8" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:52:16.070309 containerd[1571]: 2025-09-16 04:52:15.777 [INFO][4101] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 16 04:52:16.070309 containerd[1571]: 2025-09-16 04:52:15.802 [INFO][4101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5bb9568b7c--pjwqp-eth0 calico-kube-controllers-5bb9568b7c- calico-system 7fe8fd24-56e8-47c4-a114-00f71d9be5d1 821 0 2025-09-16 04:51:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bb9568b7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5bb9568b7c-pjwqp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8347f8648c1 [] [] }} ContainerID="1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" Namespace="calico-system" Pod="calico-kube-controllers-5bb9568b7c-pjwqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb9568b7c--pjwqp-"
Sep 16 04:52:16.070309 containerd[1571]: 2025-09-16 04:52:15.802 [INFO][4101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" Namespace="calico-system" Pod="calico-kube-controllers-5bb9568b7c-pjwqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb9568b7c--pjwqp-eth0"
Sep 16 04:52:16.070309 containerd[1571]: 2025-09-16 04:52:15.850 [INFO][4154] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" HandleID="k8s-pod-network.1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" Workload="localhost-k8s-calico--kube--controllers--5bb9568b7c--pjwqp-eth0"
Sep 16 04:52:16.070515 containerd[1571]: 2025-09-16 04:52:15.850 [INFO][4154] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" HandleID="k8s-pod-network.1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" Workload="localhost-k8s-calico--kube--controllers--5bb9568b7c--pjwqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000258a10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5bb9568b7c-pjwqp", "timestamp":"2025-09-16 04:52:15.850612748 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 16 04:52:16.070515 containerd[1571]: 2025-09-16 04:52:15.850 [INFO][4154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 16 04:52:16.070515 containerd[1571]: 2025-09-16 04:52:15.932 [INFO][4154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 16 04:52:16.070515 containerd[1571]: 2025-09-16 04:52:15.932 [INFO][4154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:52:16.070515 containerd[1571]: 2025-09-16 04:52:15.987 [INFO][4154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" host="localhost" Sep 16 04:52:16.070515 containerd[1571]: 2025-09-16 04:52:16.003 [INFO][4154] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:52:16.070515 containerd[1571]: 2025-09-16 04:52:16.013 [INFO][4154] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:52:16.070515 containerd[1571]: 2025-09-16 04:52:16.017 [INFO][4154] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:52:16.070515 containerd[1571]: 2025-09-16 04:52:16.021 [INFO][4154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:52:16.070515 containerd[1571]: 2025-09-16 04:52:16.021 [INFO][4154] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" host="localhost" Sep 16 04:52:16.070822 containerd[1571]: 2025-09-16 04:52:16.022 [INFO][4154] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef Sep 16 04:52:16.070822 containerd[1571]: 2025-09-16 04:52:16.028 [INFO][4154] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" host="localhost" Sep 16 04:52:16.070822 containerd[1571]: 2025-09-16 04:52:16.034 [INFO][4154] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" host="localhost" Sep 16 04:52:16.070822 containerd[1571]: 2025-09-16 04:52:16.034 [INFO][4154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" host="localhost" Sep 16 04:52:16.070822 containerd[1571]: 2025-09-16 04:52:16.034 [INFO][4154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:52:16.070822 containerd[1571]: 2025-09-16 04:52:16.034 [INFO][4154] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" HandleID="k8s-pod-network.1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" Workload="localhost-k8s-calico--kube--controllers--5bb9568b7c--pjwqp-eth0" Sep 16 04:52:16.070943 containerd[1571]: 2025-09-16 04:52:16.047 [INFO][4101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" Namespace="calico-system" Pod="calico-kube-controllers-5bb9568b7c-pjwqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb9568b7c--pjwqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bb9568b7c--pjwqp-eth0", GenerateName:"calico-kube-controllers-5bb9568b7c-", Namespace:"calico-system", SelfLink:"", UID:"7fe8fd24-56e8-47c4-a114-00f71d9be5d1", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bb9568b7c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5bb9568b7c-pjwqp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8347f8648c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:52:16.070993 containerd[1571]: 2025-09-16 04:52:16.047 [INFO][4101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" Namespace="calico-system" Pod="calico-kube-controllers-5bb9568b7c-pjwqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb9568b7c--pjwqp-eth0" Sep 16 04:52:16.070993 containerd[1571]: 2025-09-16 04:52:16.047 [INFO][4101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8347f8648c1 ContainerID="1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" Namespace="calico-system" Pod="calico-kube-controllers-5bb9568b7c-pjwqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb9568b7c--pjwqp-eth0" Sep 16 04:52:16.070993 containerd[1571]: 2025-09-16 04:52:16.049 [INFO][4101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" Namespace="calico-system" Pod="calico-kube-controllers-5bb9568b7c-pjwqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb9568b7c--pjwqp-eth0" Sep 16 04:52:16.071054 containerd[1571]: 
2025-09-16 04:52:16.050 [INFO][4101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" Namespace="calico-system" Pod="calico-kube-controllers-5bb9568b7c-pjwqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb9568b7c--pjwqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bb9568b7c--pjwqp-eth0", GenerateName:"calico-kube-controllers-5bb9568b7c-", Namespace:"calico-system", SelfLink:"", UID:"7fe8fd24-56e8-47c4-a114-00f71d9be5d1", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bb9568b7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef", Pod:"calico-kube-controllers-5bb9568b7c-pjwqp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8347f8648c1", MAC:"b6:4d:96:4a:84:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:52:16.071101 containerd[1571]: 
2025-09-16 04:52:16.065 [INFO][4101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" Namespace="calico-system" Pod="calico-kube-controllers-5bb9568b7c-pjwqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb9568b7c--pjwqp-eth0" Sep 16 04:52:16.080719 containerd[1571]: time="2025-09-16T04:52:16.080594290Z" level=info msg="connecting to shim 1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66" address="unix:///run/containerd/s/5f4574d3a6de40f70a4dc037613c7f28744c2c425587aab908300debe324a74e" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:52:16.105675 systemd[1]: Started cri-containerd-2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04.scope - libcontainer container 2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04. Sep 16 04:52:16.108259 containerd[1571]: time="2025-09-16T04:52:16.108186513Z" level=info msg="connecting to shim 1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef" address="unix:///run/containerd/s/bb827c10a1868c8fbc8d185321a225411f2db19fb7c9934d928d6e103b3517f5" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:52:16.131789 systemd[1]: Started cri-containerd-1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66.scope - libcontainer container 1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66. Sep 16 04:52:16.137622 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:52:16.142962 systemd-networkd[1466]: calibb197b6eeb9: Link UP Sep 16 04:52:16.143936 systemd-networkd[1466]: calibb197b6eeb9: Gained carrier Sep 16 04:52:16.159832 systemd[1]: Started cri-containerd-1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef.scope - libcontainer container 1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef. 
Sep 16 04:52:16.163901 containerd[1571]: 2025-09-16 04:52:15.766 [INFO][4096] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 04:52:16.163901 containerd[1571]: 2025-09-16 04:52:15.793 [INFO][4096] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--wntp7-eth0 coredns-7c65d6cfc9- kube-system d3b400d3-6416-42aa-a67c-7ccb47c6ab2d 809 0 2025-09-16 04:51:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-wntp7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibb197b6eeb9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wntp7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wntp7-" Sep 16 04:52:16.163901 containerd[1571]: 2025-09-16 04:52:15.793 [INFO][4096] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wntp7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wntp7-eth0" Sep 16 04:52:16.163901 containerd[1571]: 2025-09-16 04:52:15.853 [INFO][4147] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" HandleID="k8s-pod-network.37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" Workload="localhost-k8s-coredns--7c65d6cfc9--wntp7-eth0" Sep 16 04:52:16.164084 containerd[1571]: 2025-09-16 04:52:15.853 [INFO][4147] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" 
HandleID="k8s-pod-network.37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" Workload="localhost-k8s-coredns--7c65d6cfc9--wntp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b2e70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-wntp7", "timestamp":"2025-09-16 04:52:15.853485951 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:52:16.164084 containerd[1571]: 2025-09-16 04:52:15.853 [INFO][4147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:52:16.164084 containerd[1571]: 2025-09-16 04:52:16.034 [INFO][4147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:52:16.164084 containerd[1571]: 2025-09-16 04:52:16.035 [INFO][4147] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:52:16.164084 containerd[1571]: 2025-09-16 04:52:16.087 [INFO][4147] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" host="localhost" Sep 16 04:52:16.164084 containerd[1571]: 2025-09-16 04:52:16.105 [INFO][4147] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:52:16.164084 containerd[1571]: 2025-09-16 04:52:16.112 [INFO][4147] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:52:16.164084 containerd[1571]: 2025-09-16 04:52:16.114 [INFO][4147] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:52:16.164084 containerd[1571]: 2025-09-16 04:52:16.116 [INFO][4147] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:52:16.164084 containerd[1571]: 2025-09-16 04:52:16.116 
[INFO][4147] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" host="localhost" Sep 16 04:52:16.164304 containerd[1571]: 2025-09-16 04:52:16.118 [INFO][4147] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be Sep 16 04:52:16.164304 containerd[1571]: 2025-09-16 04:52:16.122 [INFO][4147] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" host="localhost" Sep 16 04:52:16.164304 containerd[1571]: 2025-09-16 04:52:16.132 [INFO][4147] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" host="localhost" Sep 16 04:52:16.164304 containerd[1571]: 2025-09-16 04:52:16.132 [INFO][4147] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" host="localhost" Sep 16 04:52:16.164304 containerd[1571]: 2025-09-16 04:52:16.132 [INFO][4147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:52:16.164304 containerd[1571]: 2025-09-16 04:52:16.132 [INFO][4147] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" HandleID="k8s-pod-network.37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" Workload="localhost-k8s-coredns--7c65d6cfc9--wntp7-eth0" Sep 16 04:52:16.164426 containerd[1571]: 2025-09-16 04:52:16.137 [INFO][4096] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wntp7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wntp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wntp7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d3b400d3-6416-42aa-a67c-7ccb47c6ab2d", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-wntp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb197b6eeb9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:52:16.164523 containerd[1571]: 2025-09-16 04:52:16.137 [INFO][4096] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wntp7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wntp7-eth0" Sep 16 04:52:16.164523 containerd[1571]: 2025-09-16 04:52:16.137 [INFO][4096] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb197b6eeb9 ContainerID="37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wntp7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wntp7-eth0" Sep 16 04:52:16.164523 containerd[1571]: 2025-09-16 04:52:16.144 [INFO][4096] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wntp7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wntp7-eth0" Sep 16 04:52:16.164620 containerd[1571]: 2025-09-16 04:52:16.144 [INFO][4096] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wntp7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wntp7-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wntp7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d3b400d3-6416-42aa-a67c-7ccb47c6ab2d", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be", Pod:"coredns-7c65d6cfc9-wntp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb197b6eeb9", MAC:"e2:fa:67:78:0c:21", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:52:16.164620 containerd[1571]: 2025-09-16 04:52:16.159 [INFO][4096] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wntp7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wntp7-eth0" Sep 16 04:52:16.170108 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:52:16.189346 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:52:16.207303 containerd[1571]: time="2025-09-16T04:52:16.206130911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847b79dcd8-bz5jp,Uid:e82133b7-4f24-4272-9be3-060161a5d703,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04\"" Sep 16 04:52:16.208644 containerd[1571]: time="2025-09-16T04:52:16.208592742Z" level=info msg="connecting to shim 37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be" address="unix:///run/containerd/s/1f5e6d0ededb8eaed3f4babadd30518f59765fb30a1a707fe1e8a3355faffb55" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:52:16.211216 containerd[1571]: time="2025-09-16T04:52:16.210569883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 04:52:16.246057 containerd[1571]: time="2025-09-16T04:52:16.246007402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d85dfdfc7-ksrcl,Uid:44b57730-af30-4ae3-91ab-c496b2ee37da,Namespace:calico-system,Attempt:0,} returns sandbox id \"1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66\"" Sep 16 04:52:16.248178 containerd[1571]: time="2025-09-16T04:52:16.248133473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bb9568b7c-pjwqp,Uid:7fe8fd24-56e8-47c4-a114-00f71d9be5d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef\"" Sep 16 04:52:16.249637 
systemd[1]: Started cri-containerd-37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be.scope - libcontainer container 37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be. Sep 16 04:52:16.265394 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:52:16.294950 containerd[1571]: time="2025-09-16T04:52:16.294892793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wntp7,Uid:d3b400d3-6416-42aa-a67c-7ccb47c6ab2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be\"" Sep 16 04:52:16.295570 kubelet[2707]: E0916 04:52:16.295548 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:52:16.297833 containerd[1571]: time="2025-09-16T04:52:16.297778590Z" level=info msg="CreateContainer within sandbox \"37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:52:16.322227 containerd[1571]: time="2025-09-16T04:52:16.322149917Z" level=info msg="Container 8874c2152937bb73b63ac056ee5287f625ee2ed5f1325ff6f0ab15d9dd255d87: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:52:16.328609 containerd[1571]: time="2025-09-16T04:52:16.328570539Z" level=info msg="CreateContainer within sandbox \"37b20779cb901acf2bc6902aedc34d1ad88191efa5daffce23c95a6422bad9be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8874c2152937bb73b63ac056ee5287f625ee2ed5f1325ff6f0ab15d9dd255d87\"" Sep 16 04:52:16.330173 containerd[1571]: time="2025-09-16T04:52:16.329099161Z" level=info msg="StartContainer for \"8874c2152937bb73b63ac056ee5287f625ee2ed5f1325ff6f0ab15d9dd255d87\"" Sep 16 04:52:16.330173 containerd[1571]: time="2025-09-16T04:52:16.329938737Z" level=info msg="connecting 
to shim 8874c2152937bb73b63ac056ee5287f625ee2ed5f1325ff6f0ab15d9dd255d87" address="unix:///run/containerd/s/1f5e6d0ededb8eaed3f4babadd30518f59765fb30a1a707fe1e8a3355faffb55" protocol=ttrpc version=3 Sep 16 04:52:16.340405 kubelet[2707]: I0916 04:52:16.340351 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:52:16.342520 kubelet[2707]: E0916 04:52:16.341369 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:52:16.354845 systemd[1]: Started cri-containerd-8874c2152937bb73b63ac056ee5287f625ee2ed5f1325ff6f0ab15d9dd255d87.scope - libcontainer container 8874c2152937bb73b63ac056ee5287f625ee2ed5f1325ff6f0ab15d9dd255d87. Sep 16 04:52:16.402055 containerd[1571]: time="2025-09-16T04:52:16.402007725Z" level=info msg="StartContainer for \"8874c2152937bb73b63ac056ee5287f625ee2ed5f1325ff6f0ab15d9dd255d87\" returns successfully" Sep 16 04:52:16.670394 containerd[1571]: time="2025-09-16T04:52:16.670327691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-ldwjf,Uid:9b6de2c5-9291-4ad7-9014-054fa78d6512,Namespace:calico-system,Attempt:0,}" Sep 16 04:52:16.807314 systemd-networkd[1466]: caliee4fcb9e887: Link UP Sep 16 04:52:16.810556 systemd-networkd[1466]: caliee4fcb9e887: Gained carrier Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.705 [INFO][4447] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.718 [INFO][4447] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--ldwjf-eth0 goldmane-7988f88666- calico-system 9b6de2c5-9291-4ad7-9014-054fa78d6512 814 0 2025-09-16 04:51:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-ldwjf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliee4fcb9e887 [] [] }} ContainerID="470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" Namespace="calico-system" Pod="goldmane-7988f88666-ldwjf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ldwjf-" Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.719 [INFO][4447] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" Namespace="calico-system" Pod="goldmane-7988f88666-ldwjf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ldwjf-eth0" Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.762 [INFO][4469] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" HandleID="k8s-pod-network.470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" Workload="localhost-k8s-goldmane--7988f88666--ldwjf-eth0" Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.762 [INFO][4469] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" HandleID="k8s-pod-network.470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" Workload="localhost-k8s-goldmane--7988f88666--ldwjf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000515340), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-ldwjf", "timestamp":"2025-09-16 04:52:16.762074115 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:52:16.825546 containerd[1571]: 
2025-09-16 04:52:16.762 [INFO][4469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.762 [INFO][4469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.762 [INFO][4469] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.769 [INFO][4469] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" host="localhost" Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.774 [INFO][4469] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.779 [INFO][4469] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.782 [INFO][4469] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.786 [INFO][4469] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.786 [INFO][4469] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" host="localhost" Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.788 [INFO][4469] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654 Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.793 [INFO][4469] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" 
host="localhost" Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.800 [INFO][4469] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" host="localhost" Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.800 [INFO][4469] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" host="localhost" Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.800 [INFO][4469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:52:16.825546 containerd[1571]: 2025-09-16 04:52:16.800 [INFO][4469] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" HandleID="k8s-pod-network.470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" Workload="localhost-k8s-goldmane--7988f88666--ldwjf-eth0" Sep 16 04:52:16.826366 containerd[1571]: 2025-09-16 04:52:16.804 [INFO][4447] cni-plugin/k8s.go 418: Populated endpoint ContainerID="470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" Namespace="calico-system" Pod="goldmane-7988f88666-ldwjf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ldwjf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--ldwjf-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"9b6de2c5-9291-4ad7-9014-054fa78d6512", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-ldwjf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliee4fcb9e887", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:52:16.826366 containerd[1571]: 2025-09-16 04:52:16.804 [INFO][4447] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" Namespace="calico-system" Pod="goldmane-7988f88666-ldwjf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ldwjf-eth0" Sep 16 04:52:16.826366 containerd[1571]: 2025-09-16 04:52:16.804 [INFO][4447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee4fcb9e887 ContainerID="470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" Namespace="calico-system" Pod="goldmane-7988f88666-ldwjf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ldwjf-eth0" Sep 16 04:52:16.826366 containerd[1571]: 2025-09-16 04:52:16.811 [INFO][4447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" Namespace="calico-system" Pod="goldmane-7988f88666-ldwjf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ldwjf-eth0" Sep 16 04:52:16.826366 containerd[1571]: 2025-09-16 04:52:16.811 [INFO][4447] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" Namespace="calico-system" Pod="goldmane-7988f88666-ldwjf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ldwjf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--ldwjf-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"9b6de2c5-9291-4ad7-9014-054fa78d6512", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654", Pod:"goldmane-7988f88666-ldwjf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliee4fcb9e887", MAC:"d6:c2:c1:80:41:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:52:16.826366 containerd[1571]: 2025-09-16 04:52:16.821 [INFO][4447] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" Namespace="calico-system" Pod="goldmane-7988f88666-ldwjf" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ldwjf-eth0" Sep 16 04:52:16.853204 containerd[1571]: time="2025-09-16T04:52:16.853148769Z" level=info msg="connecting to shim 470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654" address="unix:///run/containerd/s/5d19781697759154aa8e2536c4103fcd7741a4adb47ac95fb4d5f857a282574e" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:52:16.889617 systemd[1]: Started cri-containerd-470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654.scope - libcontainer container 470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654. Sep 16 04:52:16.903988 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:52:16.939815 containerd[1571]: time="2025-09-16T04:52:16.939681217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-ldwjf,Uid:9b6de2c5-9291-4ad7-9014-054fa78d6512,Namespace:calico-system,Attempt:0,} returns sandbox id \"470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654\"" Sep 16 04:52:17.014024 kubelet[2707]: E0916 04:52:17.013974 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:52:17.034844 kubelet[2707]: I0916 04:52:17.034764 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wntp7" podStartSLOduration=41.034744593 podStartE2EDuration="41.034744593s" podCreationTimestamp="2025-09-16 04:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:52:17.032013216 +0000 UTC m=+45.565288572" watchObservedRunningTime="2025-09-16 04:52:17.034744593 +0000 UTC m=+45.568019949" Sep 16 04:52:17.053103 kubelet[2707]: E0916 04:52:17.053058 2707 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:52:17.104725 systemd-networkd[1466]: vxlan.calico: Link UP Sep 16 04:52:17.104737 systemd-networkd[1466]: vxlan.calico: Gained carrier Sep 16 04:52:17.671521 containerd[1571]: time="2025-09-16T04:52:17.670795774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mxpq2,Uid:d6c9282e-1ae2-4573-883a-f016a02e49ed,Namespace:calico-system,Attempt:0,}" Sep 16 04:52:17.743777 systemd-networkd[1466]: cali8347f8648c1: Gained IPv6LL Sep 16 04:52:17.808622 systemd-networkd[1466]: calibb65c55d0b9: Gained IPv6LL Sep 16 04:52:17.809032 systemd-networkd[1466]: cali997fbb2492d: Gained IPv6LL Sep 16 04:52:17.819333 systemd-networkd[1466]: caliad6f3cd293b: Link UP Sep 16 04:52:17.820659 systemd-networkd[1466]: caliad6f3cd293b: Gained carrier Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.729 [INFO][4635] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mxpq2-eth0 csi-node-driver- calico-system d6c9282e-1ae2-4573-883a-f016a02e49ed 702 0 2025-09-16 04:51:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mxpq2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliad6f3cd293b [] [] }} ContainerID="0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" Namespace="calico-system" Pod="csi-node-driver-mxpq2" WorkloadEndpoint="localhost-k8s-csi--node--driver--mxpq2-" Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.730 [INFO][4635] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" Namespace="calico-system" Pod="csi-node-driver-mxpq2" WorkloadEndpoint="localhost-k8s-csi--node--driver--mxpq2-eth0" Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.764 [INFO][4648] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" HandleID="k8s-pod-network.0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" Workload="localhost-k8s-csi--node--driver--mxpq2-eth0" Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.764 [INFO][4648] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" HandleID="k8s-pod-network.0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" Workload="localhost-k8s-csi--node--driver--mxpq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135b60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mxpq2", "timestamp":"2025-09-16 04:52:17.764665402 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.765 [INFO][4648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.765 [INFO][4648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.765 [INFO][4648] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.778 [INFO][4648] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" host="localhost" Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.787 [INFO][4648] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.793 [INFO][4648] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.795 [INFO][4648] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.798 [INFO][4648] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.798 [INFO][4648] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" host="localhost" Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.799 [INFO][4648] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85 Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.804 [INFO][4648] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" host="localhost" Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.811 [INFO][4648] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" host="localhost" Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.811 [INFO][4648] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" host="localhost" Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.811 [INFO][4648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:52:17.840462 containerd[1571]: 2025-09-16 04:52:17.811 [INFO][4648] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" HandleID="k8s-pod-network.0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" Workload="localhost-k8s-csi--node--driver--mxpq2-eth0" Sep 16 04:52:17.841291 containerd[1571]: 2025-09-16 04:52:17.816 [INFO][4635] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" Namespace="calico-system" Pod="csi-node-driver-mxpq2" WorkloadEndpoint="localhost-k8s-csi--node--driver--mxpq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mxpq2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d6c9282e-1ae2-4573-883a-f016a02e49ed", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mxpq2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliad6f3cd293b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:52:17.841291 containerd[1571]: 2025-09-16 04:52:17.816 [INFO][4635] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" Namespace="calico-system" Pod="csi-node-driver-mxpq2" WorkloadEndpoint="localhost-k8s-csi--node--driver--mxpq2-eth0" Sep 16 04:52:17.841291 containerd[1571]: 2025-09-16 04:52:17.816 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad6f3cd293b ContainerID="0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" Namespace="calico-system" Pod="csi-node-driver-mxpq2" WorkloadEndpoint="localhost-k8s-csi--node--driver--mxpq2-eth0" Sep 16 04:52:17.841291 containerd[1571]: 2025-09-16 04:52:17.821 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" Namespace="calico-system" Pod="csi-node-driver-mxpq2" WorkloadEndpoint="localhost-k8s-csi--node--driver--mxpq2-eth0" Sep 16 04:52:17.841291 containerd[1571]: 2025-09-16 04:52:17.821 [INFO][4635] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" 
Namespace="calico-system" Pod="csi-node-driver-mxpq2" WorkloadEndpoint="localhost-k8s-csi--node--driver--mxpq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mxpq2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d6c9282e-1ae2-4573-883a-f016a02e49ed", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85", Pod:"csi-node-driver-mxpq2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliad6f3cd293b", MAC:"fa:75:6b:fd:33:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:52:17.841291 containerd[1571]: 2025-09-16 04:52:17.832 [INFO][4635] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" Namespace="calico-system" Pod="csi-node-driver-mxpq2" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--mxpq2-eth0" Sep 16 04:52:17.873030 containerd[1571]: time="2025-09-16T04:52:17.872535076Z" level=info msg="connecting to shim 0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85" address="unix:///run/containerd/s/5aedfcb83ec729e9734e1dff55ac1f893184a1c06e5bcfb9b3f085e34333cd52" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:52:17.900724 systemd[1]: Started cri-containerd-0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85.scope - libcontainer container 0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85. Sep 16 04:52:17.921113 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:52:17.942092 containerd[1571]: time="2025-09-16T04:52:17.941867790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mxpq2,Uid:d6c9282e-1ae2-4573-883a-f016a02e49ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85\"" Sep 16 04:52:18.056283 kubelet[2707]: E0916 04:52:18.056230 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:52:18.127624 systemd-networkd[1466]: calibb197b6eeb9: Gained IPv6LL Sep 16 04:52:18.128592 systemd-networkd[1466]: caliee4fcb9e887: Gained IPv6LL Sep 16 04:52:18.639678 systemd-networkd[1466]: vxlan.calico: Gained IPv6LL Sep 16 04:52:19.057879 kubelet[2707]: E0916 04:52:19.057828 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:52:19.087987 systemd-networkd[1466]: caliad6f3cd293b: Gained IPv6LL Sep 16 04:52:19.100192 containerd[1571]: time="2025-09-16T04:52:19.100113765Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:19.101700 containerd[1571]: time="2025-09-16T04:52:19.100887106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 16 04:52:19.102293 containerd[1571]: time="2025-09-16T04:52:19.102250385Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:19.113613 containerd[1571]: time="2025-09-16T04:52:19.113546584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:19.114472 containerd[1571]: time="2025-09-16T04:52:19.114399244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.903763758s" Sep 16 04:52:19.114472 containerd[1571]: time="2025-09-16T04:52:19.114450089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 16 04:52:19.115828 containerd[1571]: time="2025-09-16T04:52:19.115620806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 16 04:52:19.116995 containerd[1571]: time="2025-09-16T04:52:19.116958167Z" level=info msg="CreateContainer within sandbox \"2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 04:52:19.126613 
containerd[1571]: time="2025-09-16T04:52:19.126184551Z" level=info msg="Container eeca3bb96f82ac285724dbd684b97eed8a2374bf737cdc52ad8fd87ae6833e45: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:52:19.137684 containerd[1571]: time="2025-09-16T04:52:19.137630542Z" level=info msg="CreateContainer within sandbox \"2f1fe177917d65770a7ce32e088bcf17d7eeba5f0b2b7c01ccd47f30cd9a4e04\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eeca3bb96f82ac285724dbd684b97eed8a2374bf737cdc52ad8fd87ae6833e45\"" Sep 16 04:52:19.138549 containerd[1571]: time="2025-09-16T04:52:19.138516905Z" level=info msg="StartContainer for \"eeca3bb96f82ac285724dbd684b97eed8a2374bf737cdc52ad8fd87ae6833e45\"" Sep 16 04:52:19.139752 containerd[1571]: time="2025-09-16T04:52:19.139717880Z" level=info msg="connecting to shim eeca3bb96f82ac285724dbd684b97eed8a2374bf737cdc52ad8fd87ae6833e45" address="unix:///run/containerd/s/2dc16b6377f204312fd440b92e5f3522b904fcf1ef2d681ceb7e03de46d49fa8" protocol=ttrpc version=3 Sep 16 04:52:19.167736 systemd[1]: Started cri-containerd-eeca3bb96f82ac285724dbd684b97eed8a2374bf737cdc52ad8fd87ae6833e45.scope - libcontainer container eeca3bb96f82ac285724dbd684b97eed8a2374bf737cdc52ad8fd87ae6833e45. 
Sep 16 04:52:19.229865 containerd[1571]: time="2025-09-16T04:52:19.229806009Z" level=info msg="StartContainer for \"eeca3bb96f82ac285724dbd684b97eed8a2374bf737cdc52ad8fd87ae6833e45\" returns successfully" Sep 16 04:52:20.075844 kubelet[2707]: I0916 04:52:20.075752 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-847b79dcd8-bz5jp" podStartSLOduration=30.170508148 podStartE2EDuration="33.075732035s" podCreationTimestamp="2025-09-16 04:51:47 +0000 UTC" firstStartedPulling="2025-09-16 04:52:16.210311048 +0000 UTC m=+44.743586394" lastFinishedPulling="2025-09-16 04:52:19.115534925 +0000 UTC m=+47.648810281" observedRunningTime="2025-09-16 04:52:20.075029025 +0000 UTC m=+48.608304381" watchObservedRunningTime="2025-09-16 04:52:20.075732035 +0000 UTC m=+48.609007392" Sep 16 04:52:20.686191 containerd[1571]: time="2025-09-16T04:52:20.686110291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:20.686855 containerd[1571]: time="2025-09-16T04:52:20.686826915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 16 04:52:20.688051 containerd[1571]: time="2025-09-16T04:52:20.688011488Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:20.690026 containerd[1571]: time="2025-09-16T04:52:20.689986234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:20.690652 containerd[1571]: time="2025-09-16T04:52:20.690599906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id 
\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.574937813s" Sep 16 04:52:20.690652 containerd[1571]: time="2025-09-16T04:52:20.690639931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 16 04:52:20.691701 containerd[1571]: time="2025-09-16T04:52:20.691654194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 16 04:52:20.692932 containerd[1571]: time="2025-09-16T04:52:20.692902646Z" level=info msg="CreateContainer within sandbox \"1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 16 04:52:20.702236 containerd[1571]: time="2025-09-16T04:52:20.702194143Z" level=info msg="Container 77ab1c2a0dfa2eff014e0901edb1674792c1cd059c887acb8c83c322c8d01467: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:52:20.721314 containerd[1571]: time="2025-09-16T04:52:20.721243662Z" level=info msg="CreateContainer within sandbox \"1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"77ab1c2a0dfa2eff014e0901edb1674792c1cd059c887acb8c83c322c8d01467\"" Sep 16 04:52:20.722230 containerd[1571]: time="2025-09-16T04:52:20.722178826Z" level=info msg="StartContainer for \"77ab1c2a0dfa2eff014e0901edb1674792c1cd059c887acb8c83c322c8d01467\"" Sep 16 04:52:20.723244 containerd[1571]: time="2025-09-16T04:52:20.723207918Z" level=info msg="connecting to shim 77ab1c2a0dfa2eff014e0901edb1674792c1cd059c887acb8c83c322c8d01467" address="unix:///run/containerd/s/5f4574d3a6de40f70a4dc037613c7f28744c2c425587aab908300debe324a74e" protocol=ttrpc version=3 Sep 16 
04:52:20.742619 systemd[1]: Started cri-containerd-77ab1c2a0dfa2eff014e0901edb1674792c1cd059c887acb8c83c322c8d01467.scope - libcontainer container 77ab1c2a0dfa2eff014e0901edb1674792c1cd059c887acb8c83c322c8d01467. Sep 16 04:52:20.794093 containerd[1571]: time="2025-09-16T04:52:20.794037908Z" level=info msg="StartContainer for \"77ab1c2a0dfa2eff014e0901edb1674792c1cd059c887acb8c83c322c8d01467\" returns successfully" Sep 16 04:52:20.984308 systemd[1]: Started sshd@8-10.0.0.58:22-10.0.0.1:59856.service - OpenSSH per-connection server daemon (10.0.0.1:59856). Sep 16 04:52:21.067190 kubelet[2707]: I0916 04:52:21.067136 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:52:21.069794 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 59856 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:52:21.071709 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:21.077982 systemd-logind[1507]: New session 9 of user core. Sep 16 04:52:21.088686 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 16 04:52:21.234335 sshd[4803]: Connection closed by 10.0.0.1 port 59856 Sep 16 04:52:21.234777 sshd-session[4800]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:21.239997 systemd[1]: sshd@8-10.0.0.58:22-10.0.0.1:59856.service: Deactivated successfully. Sep 16 04:52:21.242446 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 04:52:21.243288 systemd-logind[1507]: Session 9 logged out. Waiting for processes to exit. Sep 16 04:52:21.244852 systemd-logind[1507]: Removed session 9. 
Sep 16 04:52:23.427233 containerd[1571]: time="2025-09-16T04:52:23.427146554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:23.428658 containerd[1571]: time="2025-09-16T04:52:23.428600111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 16 04:52:23.429915 containerd[1571]: time="2025-09-16T04:52:23.429845417Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:23.432556 containerd[1571]: time="2025-09-16T04:52:23.432509496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:23.433333 containerd[1571]: time="2025-09-16T04:52:23.433278820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.741587446s" Sep 16 04:52:23.433333 containerd[1571]: time="2025-09-16T04:52:23.433321740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 16 04:52:23.434398 containerd[1571]: time="2025-09-16T04:52:23.434364226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 16 04:52:23.444296 containerd[1571]: time="2025-09-16T04:52:23.444238964Z" level=info msg="CreateContainer within sandbox 
\"1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 16 04:52:23.453655 containerd[1571]: time="2025-09-16T04:52:23.453594188Z" level=info msg="Container 93b3c4140cd5ae342c6658a1a7369dc7231b43b0c1bcd2f2eb64f51e3f73c682: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:52:23.463295 containerd[1571]: time="2025-09-16T04:52:23.463235027Z" level=info msg="CreateContainer within sandbox \"1114377bf4f1134857287047499398841e0c78715fbe19c6fabbca34c48d52ef\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"93b3c4140cd5ae342c6658a1a7369dc7231b43b0c1bcd2f2eb64f51e3f73c682\"" Sep 16 04:52:23.464026 containerd[1571]: time="2025-09-16T04:52:23.463878364Z" level=info msg="StartContainer for \"93b3c4140cd5ae342c6658a1a7369dc7231b43b0c1bcd2f2eb64f51e3f73c682\"" Sep 16 04:52:23.464922 containerd[1571]: time="2025-09-16T04:52:23.464891515Z" level=info msg="connecting to shim 93b3c4140cd5ae342c6658a1a7369dc7231b43b0c1bcd2f2eb64f51e3f73c682" address="unix:///run/containerd/s/bb827c10a1868c8fbc8d185321a225411f2db19fb7c9934d928d6e103b3517f5" protocol=ttrpc version=3 Sep 16 04:52:23.493744 systemd[1]: Started cri-containerd-93b3c4140cd5ae342c6658a1a7369dc7231b43b0c1bcd2f2eb64f51e3f73c682.scope - libcontainer container 93b3c4140cd5ae342c6658a1a7369dc7231b43b0c1bcd2f2eb64f51e3f73c682. 
Sep 16 04:52:23.598490 containerd[1571]: time="2025-09-16T04:52:23.598415619Z" level=info msg="StartContainer for \"93b3c4140cd5ae342c6658a1a7369dc7231b43b0c1bcd2f2eb64f51e3f73c682\" returns successfully" Sep 16 04:52:24.099933 kubelet[2707]: I0916 04:52:24.099738 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bb9568b7c-pjwqp" podStartSLOduration=26.915389881 podStartE2EDuration="34.099237581s" podCreationTimestamp="2025-09-16 04:51:50 +0000 UTC" firstStartedPulling="2025-09-16 04:52:16.250382745 +0000 UTC m=+44.783658101" lastFinishedPulling="2025-09-16 04:52:23.434230445 +0000 UTC m=+51.967505801" observedRunningTime="2025-09-16 04:52:24.097958512 +0000 UTC m=+52.631233878" watchObservedRunningTime="2025-09-16 04:52:24.099237581 +0000 UTC m=+52.632512937" Sep 16 04:52:24.141778 containerd[1571]: time="2025-09-16T04:52:24.141718446Z" level=info msg="TaskExit event in podsandbox handler container_id:\"93b3c4140cd5ae342c6658a1a7369dc7231b43b0c1bcd2f2eb64f51e3f73c682\" id:\"635f80a3d5c65db169d1c20e2d55e4ac38e6f6d63cd2b628bbb2332b5d4c166d\" pid:4890 exited_at:{seconds:1757998344 nanos:141192218}" Sep 16 04:52:24.671021 containerd[1571]: time="2025-09-16T04:52:24.670957172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847b79dcd8-sdl2m,Uid:8bfda952-8fad-4baa-b0d7-f4854a0c9878,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:52:24.938008 systemd-networkd[1466]: cali7680da065ed: Link UP Sep 16 04:52:24.938258 systemd-networkd[1466]: cali7680da065ed: Gained carrier Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.864 [INFO][4900] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--847b79dcd8--sdl2m-eth0 calico-apiserver-847b79dcd8- calico-apiserver 8bfda952-8fad-4baa-b0d7-f4854a0c9878 819 0 2025-09-16 04:51:47 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:847b79dcd8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-847b79dcd8-sdl2m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7680da065ed [] [] }} ContainerID="8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-sdl2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--sdl2m-" Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.864 [INFO][4900] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-sdl2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--sdl2m-eth0" Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.895 [INFO][4916] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" HandleID="k8s-pod-network.8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" Workload="localhost-k8s-calico--apiserver--847b79dcd8--sdl2m-eth0" Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.895 [INFO][4916] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" HandleID="k8s-pod-network.8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" Workload="localhost-k8s-calico--apiserver--847b79dcd8--sdl2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006ca900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-847b79dcd8-sdl2m", "timestamp":"2025-09-16 04:52:24.895710242 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.895 [INFO][4916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.895 [INFO][4916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.896 [INFO][4916] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.905 [INFO][4916] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" host="localhost" Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.910 [INFO][4916] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.913 [INFO][4916] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.915 [INFO][4916] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.917 [INFO][4916] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.917 [INFO][4916] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" host="localhost" Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.919 [INFO][4916] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0 Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.922 [INFO][4916] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" host="localhost" Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.930 [INFO][4916] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" host="localhost" Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.930 [INFO][4916] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" host="localhost" Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.930 [INFO][4916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:52:24.956820 containerd[1571]: 2025-09-16 04:52:24.930 [INFO][4916] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" HandleID="k8s-pod-network.8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" Workload="localhost-k8s-calico--apiserver--847b79dcd8--sdl2m-eth0" Sep 16 04:52:24.957479 containerd[1571]: 2025-09-16 04:52:24.934 [INFO][4900] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-sdl2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--sdl2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--847b79dcd8--sdl2m-eth0", GenerateName:"calico-apiserver-847b79dcd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"8bfda952-8fad-4baa-b0d7-f4854a0c9878", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 51, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847b79dcd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-847b79dcd8-sdl2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7680da065ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:52:24.957479 containerd[1571]: 2025-09-16 04:52:24.934 [INFO][4900] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-sdl2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--sdl2m-eth0" Sep 16 04:52:24.957479 containerd[1571]: 2025-09-16 04:52:24.935 [INFO][4900] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7680da065ed ContainerID="8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-sdl2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--sdl2m-eth0" Sep 16 04:52:24.957479 containerd[1571]: 2025-09-16 04:52:24.937 [INFO][4900] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-sdl2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--sdl2m-eth0" Sep 16 04:52:24.957479 containerd[1571]: 2025-09-16 04:52:24.938 [INFO][4900] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-sdl2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--sdl2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--847b79dcd8--sdl2m-eth0", 
GenerateName:"calico-apiserver-847b79dcd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"8bfda952-8fad-4baa-b0d7-f4854a0c9878", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 51, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847b79dcd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0", Pod:"calico-apiserver-847b79dcd8-sdl2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7680da065ed", MAC:"6e:f0:95:ad:0c:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:52:24.957479 containerd[1571]: 2025-09-16 04:52:24.951 [INFO][4900] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" Namespace="calico-apiserver" Pod="calico-apiserver-847b79dcd8-sdl2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--847b79dcd8--sdl2m-eth0" Sep 16 04:52:25.009767 containerd[1571]: time="2025-09-16T04:52:25.009598002Z" level=info msg="connecting to shim 8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0" 
address="unix:///run/containerd/s/4691075aaf3e471f5b6a853bf7649db8295dce354865eb1beec07c8e32e9a69a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:52:25.047734 systemd[1]: Started cri-containerd-8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0.scope - libcontainer container 8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0. Sep 16 04:52:25.069578 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:52:25.214061 containerd[1571]: time="2025-09-16T04:52:25.213721188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847b79dcd8-sdl2m,Uid:8bfda952-8fad-4baa-b0d7-f4854a0c9878,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0\"" Sep 16 04:52:25.217279 containerd[1571]: time="2025-09-16T04:52:25.217230593Z" level=info msg="CreateContainer within sandbox \"8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 04:52:25.348992 containerd[1571]: time="2025-09-16T04:52:25.348939800Z" level=info msg="Container 82799fe0790651f775c05493b62053777fe26749514fc5c0ab6a1265f10a9dc1: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:52:25.358947 containerd[1571]: time="2025-09-16T04:52:25.358900488Z" level=info msg="CreateContainer within sandbox \"8dcc99c9894040ddb19b05ace50a04795c941a829035adf862369562d4b55cc0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"82799fe0790651f775c05493b62053777fe26749514fc5c0ab6a1265f10a9dc1\"" Sep 16 04:52:25.359501 containerd[1571]: time="2025-09-16T04:52:25.359471510Z" level=info msg="StartContainer for \"82799fe0790651f775c05493b62053777fe26749514fc5c0ab6a1265f10a9dc1\"" Sep 16 04:52:25.360643 containerd[1571]: time="2025-09-16T04:52:25.360581652Z" level=info msg="connecting to shim 
82799fe0790651f775c05493b62053777fe26749514fc5c0ab6a1265f10a9dc1" address="unix:///run/containerd/s/4691075aaf3e471f5b6a853bf7649db8295dce354865eb1beec07c8e32e9a69a" protocol=ttrpc version=3 Sep 16 04:52:25.382671 systemd[1]: Started cri-containerd-82799fe0790651f775c05493b62053777fe26749514fc5c0ab6a1265f10a9dc1.scope - libcontainer container 82799fe0790651f775c05493b62053777fe26749514fc5c0ab6a1265f10a9dc1. Sep 16 04:52:25.458473 containerd[1571]: time="2025-09-16T04:52:25.458393793Z" level=info msg="StartContainer for \"82799fe0790651f775c05493b62053777fe26749514fc5c0ab6a1265f10a9dc1\" returns successfully" Sep 16 04:52:26.110457 kubelet[2707]: I0916 04:52:26.109969 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-847b79dcd8-sdl2m" podStartSLOduration=39.109946021 podStartE2EDuration="39.109946021s" podCreationTimestamp="2025-09-16 04:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:52:26.109660544 +0000 UTC m=+54.642935931" watchObservedRunningTime="2025-09-16 04:52:26.109946021 +0000 UTC m=+54.643221377" Sep 16 04:52:26.251379 systemd[1]: Started sshd@9-10.0.0.58:22-10.0.0.1:59862.service - OpenSSH per-connection server daemon (10.0.0.1:59862). Sep 16 04:52:26.257669 systemd-networkd[1466]: cali7680da065ed: Gained IPv6LL Sep 16 04:52:26.322352 sshd[5018]: Accepted publickey for core from 10.0.0.1 port 59862 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:52:26.324173 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:26.329326 systemd-logind[1507]: New session 10 of user core. Sep 16 04:52:26.338715 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 16 04:52:26.523493 sshd[5021]: Connection closed by 10.0.0.1 port 59862 Sep 16 04:52:26.523775 sshd-session[5018]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:26.533088 systemd[1]: sshd@9-10.0.0.58:22-10.0.0.1:59862.service: Deactivated successfully. Sep 16 04:52:26.534940 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 04:52:26.535711 systemd-logind[1507]: Session 10 logged out. Waiting for processes to exit. Sep 16 04:52:26.538942 systemd[1]: Started sshd@10-10.0.0.58:22-10.0.0.1:59876.service - OpenSSH per-connection server daemon (10.0.0.1:59876). Sep 16 04:52:26.539676 systemd-logind[1507]: Removed session 10. Sep 16 04:52:26.601337 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 59876 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:52:26.602701 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:26.607608 systemd-logind[1507]: New session 11 of user core. Sep 16 04:52:26.618604 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 04:52:26.779600 sshd[5039]: Connection closed by 10.0.0.1 port 59876 Sep 16 04:52:26.779936 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:26.793699 systemd[1]: sshd@10-10.0.0.58:22-10.0.0.1:59876.service: Deactivated successfully. Sep 16 04:52:26.796360 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 04:52:26.798667 systemd-logind[1507]: Session 11 logged out. Waiting for processes to exit. Sep 16 04:52:26.804065 systemd[1]: Started sshd@11-10.0.0.58:22-10.0.0.1:59878.service - OpenSSH per-connection server daemon (10.0.0.1:59878). Sep 16 04:52:26.806968 systemd-logind[1507]: Removed session 11. 
Sep 16 04:52:26.852531 sshd[5051]: Accepted publickey for core from 10.0.0.1 port 59878 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:52:26.854490 sshd-session[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:26.860263 systemd-logind[1507]: New session 12 of user core. Sep 16 04:52:26.867720 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 04:52:26.999688 sshd[5054]: Connection closed by 10.0.0.1 port 59878 Sep 16 04:52:27.000139 sshd-session[5051]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:27.004305 systemd[1]: sshd@11-10.0.0.58:22-10.0.0.1:59878.service: Deactivated successfully. Sep 16 04:52:27.006965 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 04:52:27.009162 systemd-logind[1507]: Session 12 logged out. Waiting for processes to exit. Sep 16 04:52:27.010649 systemd-logind[1507]: Removed session 12. Sep 16 04:52:27.099504 kubelet[2707]: I0916 04:52:27.099276 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:52:27.670168 kubelet[2707]: E0916 04:52:27.670100 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:52:27.671190 containerd[1571]: time="2025-09-16T04:52:27.671147091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sq727,Uid:7b6542b1-a4a2-4245-8a25-ea1a658acd1e,Namespace:kube-system,Attempt:0,}" Sep 16 04:52:27.683472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1782815530.mount: Deactivated successfully. 
Sep 16 04:52:27.818693 systemd-networkd[1466]: cali63009beb47c: Link UP Sep 16 04:52:27.819695 systemd-networkd[1466]: cali63009beb47c: Gained carrier Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.737 [INFO][5077] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--sq727-eth0 coredns-7c65d6cfc9- kube-system 7b6542b1-a4a2-4245-8a25-ea1a658acd1e 822 0 2025-09-16 04:51:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-sq727 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali63009beb47c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sq727" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sq727-" Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.737 [INFO][5077] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sq727" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sq727-eth0" Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.770 [INFO][5097] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" HandleID="k8s-pod-network.b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" Workload="localhost-k8s-coredns--7c65d6cfc9--sq727-eth0" Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.771 [INFO][5097] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" 
HandleID="k8s-pod-network.b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" Workload="localhost-k8s-coredns--7c65d6cfc9--sq727-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df2f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-sq727", "timestamp":"2025-09-16 04:52:27.770836691 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.771 [INFO][5097] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.771 [INFO][5097] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.771 [INFO][5097] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.778 [INFO][5097] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" host="localhost" Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.783 [INFO][5097] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.787 [INFO][5097] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.789 [INFO][5097] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.791 [INFO][5097] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.791 
[INFO][5097] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" host="localhost" Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.793 [INFO][5097] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.797 [INFO][5097] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" host="localhost" Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.807 [INFO][5097] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" host="localhost" Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.807 [INFO][5097] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" host="localhost" Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.807 [INFO][5097] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:52:27.842199 containerd[1571]: 2025-09-16 04:52:27.808 [INFO][5097] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" HandleID="k8s-pod-network.b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" Workload="localhost-k8s-coredns--7c65d6cfc9--sq727-eth0" Sep 16 04:52:27.843141 containerd[1571]: 2025-09-16 04:52:27.814 [INFO][5077] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sq727" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sq727-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--sq727-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7b6542b1-a4a2-4245-8a25-ea1a658acd1e", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-sq727", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63009beb47c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:52:27.843141 containerd[1571]: 2025-09-16 04:52:27.814 [INFO][5077] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sq727" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sq727-eth0" Sep 16 04:52:27.843141 containerd[1571]: 2025-09-16 04:52:27.814 [INFO][5077] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63009beb47c ContainerID="b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sq727" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sq727-eth0" Sep 16 04:52:27.843141 containerd[1571]: 2025-09-16 04:52:27.820 [INFO][5077] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sq727" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sq727-eth0" Sep 16 04:52:27.843141 containerd[1571]: 2025-09-16 04:52:27.822 [INFO][5077] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sq727" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sq727-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--sq727-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7b6542b1-a4a2-4245-8a25-ea1a658acd1e", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b", Pod:"coredns-7c65d6cfc9-sq727", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63009beb47c", MAC:"b6:52:99:64:ef:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:52:27.843141 containerd[1571]: 2025-09-16 04:52:27.834 [INFO][5077] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sq727" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sq727-eth0" Sep 16 04:52:27.872238 containerd[1571]: time="2025-09-16T04:52:27.872108118Z" level=info msg="connecting to shim b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b" address="unix:///run/containerd/s/6e44c7ebaf3b62d442d8a85155a62126cb5178256ec971d1b0ae68ca52b78b9f" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:52:27.938619 systemd[1]: Started cri-containerd-b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b.scope - libcontainer container b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b. Sep 16 04:52:27.961353 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:52:28.205593 containerd[1571]: time="2025-09-16T04:52:28.205350724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sq727,Uid:7b6542b1-a4a2-4245-8a25-ea1a658acd1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b\"" Sep 16 04:52:28.206385 kubelet[2707]: E0916 04:52:28.206348 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:52:28.208200 containerd[1571]: time="2025-09-16T04:52:28.208163831Z" level=info msg="CreateContainer within sandbox \"b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:52:28.223838 containerd[1571]: time="2025-09-16T04:52:28.223784876Z" level=info msg="Container 0bfc4e351b98c932bbd64ff17f16506d6d9f4997c54681ae6b30ce1092387c96: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:52:28.241171 containerd[1571]: time="2025-09-16T04:52:28.241107402Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:28.242911 containerd[1571]: time="2025-09-16T04:52:28.242870600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 16 04:52:28.243918 containerd[1571]: time="2025-09-16T04:52:28.243884171Z" level=info msg="CreateContainer within sandbox \"b84d757cd1acf897ab4680a0c004e65bf86c48c01ce12e96e2e8abe0d06fd10b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0bfc4e351b98c932bbd64ff17f16506d6d9f4997c54681ae6b30ce1092387c96\"" Sep 16 04:52:28.244368 containerd[1571]: time="2025-09-16T04:52:28.244339396Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:28.244542 containerd[1571]: time="2025-09-16T04:52:28.244502581Z" level=info msg="StartContainer for \"0bfc4e351b98c932bbd64ff17f16506d6d9f4997c54681ae6b30ce1092387c96\"" Sep 16 04:52:28.245379 containerd[1571]: time="2025-09-16T04:52:28.245349510Z" level=info msg="connecting to shim 0bfc4e351b98c932bbd64ff17f16506d6d9f4997c54681ae6b30ce1092387c96" address="unix:///run/containerd/s/6e44c7ebaf3b62d442d8a85155a62126cb5178256ec971d1b0ae68ca52b78b9f" protocol=ttrpc version=3 Sep 16 04:52:28.246680 containerd[1571]: time="2025-09-16T04:52:28.246643167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:28.247531 containerd[1571]: time="2025-09-16T04:52:28.247487942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", 
repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.813081156s" Sep 16 04:52:28.247531 containerd[1571]: time="2025-09-16T04:52:28.247523148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 16 04:52:28.249233 containerd[1571]: time="2025-09-16T04:52:28.248518736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 16 04:52:28.250356 containerd[1571]: time="2025-09-16T04:52:28.250323761Z" level=info msg="CreateContainer within sandbox \"470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 16 04:52:28.261252 containerd[1571]: time="2025-09-16T04:52:28.261207850Z" level=info msg="Container 6145b968adc0f25af90a4037e11423f1999c357e8a44e29652b3107d4d0c1820: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:52:28.271091 containerd[1571]: time="2025-09-16T04:52:28.271039734Z" level=info msg="CreateContainer within sandbox \"470d9e486913ac8d8cdc44861be7a7e23e66a5f582a04dbf2e69c2e85f466654\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6145b968adc0f25af90a4037e11423f1999c357e8a44e29652b3107d4d0c1820\"" Sep 16 04:52:28.272469 containerd[1571]: time="2025-09-16T04:52:28.271646262Z" level=info msg="StartContainer for \"6145b968adc0f25af90a4037e11423f1999c357e8a44e29652b3107d4d0c1820\"" Sep 16 04:52:28.273128 containerd[1571]: time="2025-09-16T04:52:28.273075533Z" level=info msg="connecting to shim 6145b968adc0f25af90a4037e11423f1999c357e8a44e29652b3107d4d0c1820" address="unix:///run/containerd/s/5d19781697759154aa8e2536c4103fcd7741a4adb47ac95fb4d5f857a282574e" protocol=ttrpc version=3 Sep 16 04:52:28.279660 systemd[1]: Started cri-containerd-0bfc4e351b98c932bbd64ff17f16506d6d9f4997c54681ae6b30ce1092387c96.scope - libcontainer container 
0bfc4e351b98c932bbd64ff17f16506d6d9f4997c54681ae6b30ce1092387c96. Sep 16 04:52:28.299593 systemd[1]: Started cri-containerd-6145b968adc0f25af90a4037e11423f1999c357e8a44e29652b3107d4d0c1820.scope - libcontainer container 6145b968adc0f25af90a4037e11423f1999c357e8a44e29652b3107d4d0c1820. Sep 16 04:52:28.336370 containerd[1571]: time="2025-09-16T04:52:28.336307503Z" level=info msg="StartContainer for \"0bfc4e351b98c932bbd64ff17f16506d6d9f4997c54681ae6b30ce1092387c96\" returns successfully" Sep 16 04:52:28.367196 containerd[1571]: time="2025-09-16T04:52:28.367136677Z" level=info msg="StartContainer for \"6145b968adc0f25af90a4037e11423f1999c357e8a44e29652b3107d4d0c1820\" returns successfully" Sep 16 04:52:29.088264 containerd[1571]: time="2025-09-16T04:52:29.088197068Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bb5b137211b45436b07ef2d37455f7d4f1f5f8bb9ce550c94754c69563caa0\" id:\"a8c89476f62faf5f30e93db1dafbd30504b9ba78b59b543bb8271c4c420d39a8\" pid:5246 exited_at:{seconds:1757998349 nanos:87605579}" Sep 16 04:52:29.119477 kubelet[2707]: E0916 04:52:29.119061 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:52:29.156690 kubelet[2707]: I0916 04:52:29.156539 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sq727" podStartSLOduration=53.156514764 podStartE2EDuration="53.156514764s" podCreationTimestamp="2025-09-16 04:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:52:29.155848525 +0000 UTC m=+57.689123881" watchObservedRunningTime="2025-09-16 04:52:29.156514764 +0000 UTC m=+57.689790121" Sep 16 04:52:29.175941 kubelet[2707]: I0916 04:52:29.175846 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/goldmane-7988f88666-ldwjf" podStartSLOduration=28.868528444 podStartE2EDuration="40.175827574s" podCreationTimestamp="2025-09-16 04:51:49 +0000 UTC" firstStartedPulling="2025-09-16 04:52:16.941097766 +0000 UTC m=+45.474373122" lastFinishedPulling="2025-09-16 04:52:28.248396886 +0000 UTC m=+56.781672252" observedRunningTime="2025-09-16 04:52:29.174735175 +0000 UTC m=+57.708010551" watchObservedRunningTime="2025-09-16 04:52:29.175827574 +0000 UTC m=+57.709102930" Sep 16 04:52:29.254765 containerd[1571]: time="2025-09-16T04:52:29.254683929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6145b968adc0f25af90a4037e11423f1999c357e8a44e29652b3107d4d0c1820\" id:\"f4833c4fb572d9915bf11505d0066544ed4efb262e39e11b9c626ce6d64b3e73\" pid:5272 exit_status:1 exited_at:{seconds:1757998349 nanos:254135942}" Sep 16 04:52:29.839718 systemd-networkd[1466]: cali63009beb47c: Gained IPv6LL Sep 16 04:52:30.121396 kubelet[2707]: E0916 04:52:30.121225 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:52:30.231918 containerd[1571]: time="2025-09-16T04:52:30.231852101Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6145b968adc0f25af90a4037e11423f1999c357e8a44e29652b3107d4d0c1820\" id:\"c9c9b6340f33319d55914b7a9609c9b8391d2fae7146766ecdfe108c6c6eca7c\" pid:5306 exit_status:1 exited_at:{seconds:1757998350 nanos:231467971}" Sep 16 04:52:30.399939 containerd[1571]: time="2025-09-16T04:52:30.399773554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:30.400771 containerd[1571]: time="2025-09-16T04:52:30.400712706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 16 04:52:30.402075 containerd[1571]: time="2025-09-16T04:52:30.402031560Z" 
level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:30.404735 containerd[1571]: time="2025-09-16T04:52:30.404702310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:30.405602 containerd[1571]: time="2025-09-16T04:52:30.405571611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.157028369s" Sep 16 04:52:30.405602 containerd[1571]: time="2025-09-16T04:52:30.405600345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 16 04:52:30.407229 containerd[1571]: time="2025-09-16T04:52:30.406941861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 16 04:52:30.408614 containerd[1571]: time="2025-09-16T04:52:30.408575195Z" level=info msg="CreateContainer within sandbox \"0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 16 04:52:30.427248 containerd[1571]: time="2025-09-16T04:52:30.427067984Z" level=info msg="Container 7749f3cc2a23377e5afecd82f870d535d081174c57e80362dcc75eabfa1cbfab: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:52:30.450057 containerd[1571]: time="2025-09-16T04:52:30.449973291Z" level=info msg="CreateContainer within sandbox \"0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7749f3cc2a23377e5afecd82f870d535d081174c57e80362dcc75eabfa1cbfab\"" Sep 16 04:52:30.450616 containerd[1571]: time="2025-09-16T04:52:30.450570842Z" level=info msg="StartContainer for \"7749f3cc2a23377e5afecd82f870d535d081174c57e80362dcc75eabfa1cbfab\"" Sep 16 04:52:30.452161 containerd[1571]: time="2025-09-16T04:52:30.452127772Z" level=info msg="connecting to shim 7749f3cc2a23377e5afecd82f870d535d081174c57e80362dcc75eabfa1cbfab" address="unix:///run/containerd/s/5aedfcb83ec729e9734e1dff55ac1f893184a1c06e5bcfb9b3f085e34333cd52" protocol=ttrpc version=3 Sep 16 04:52:30.484723 systemd[1]: Started cri-containerd-7749f3cc2a23377e5afecd82f870d535d081174c57e80362dcc75eabfa1cbfab.scope - libcontainer container 7749f3cc2a23377e5afecd82f870d535d081174c57e80362dcc75eabfa1cbfab. Sep 16 04:52:30.550874 containerd[1571]: time="2025-09-16T04:52:30.550802811Z" level=info msg="StartContainer for \"7749f3cc2a23377e5afecd82f870d535d081174c57e80362dcc75eabfa1cbfab\" returns successfully" Sep 16 04:52:31.126479 kubelet[2707]: E0916 04:52:31.126414 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:52:32.016505 systemd[1]: Started sshd@12-10.0.0.58:22-10.0.0.1:59260.service - OpenSSH per-connection server daemon (10.0.0.1:59260). Sep 16 04:52:32.117645 sshd[5353]: Accepted publickey for core from 10.0.0.1 port 59260 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:52:32.120452 sshd-session[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:32.127385 systemd-logind[1507]: New session 13 of user core. Sep 16 04:52:32.132631 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 16 04:52:32.171337 containerd[1571]: time="2025-09-16T04:52:32.171271580Z" level=info msg="TaskExit event in podsandbox handler container_id:\"93b3c4140cd5ae342c6658a1a7369dc7231b43b0c1bcd2f2eb64f51e3f73c682\" id:\"538015bed8b48f644cd2a5ad1ecb598e5e078251810736b28ff8577b9e106517\" pid:5367 exited_at:{seconds:1757998352 nanos:169343823}" Sep 16 04:52:32.173334 containerd[1571]: time="2025-09-16T04:52:32.173190739Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6145b968adc0f25af90a4037e11423f1999c357e8a44e29652b3107d4d0c1820\" id:\"25fcc8205432593c02459f80c11bd9f076a3a127b5955ae16d47d4b3e4c5cbd4\" pid:5385 exited_at:{seconds:1757998352 nanos:172727641}" Sep 16 04:52:32.294888 sshd[5396]: Connection closed by 10.0.0.1 port 59260 Sep 16 04:52:32.295259 sshd-session[5353]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:32.300803 systemd[1]: sshd@12-10.0.0.58:22-10.0.0.1:59260.service: Deactivated successfully. Sep 16 04:52:32.303537 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 04:52:32.304322 systemd-logind[1507]: Session 13 logged out. Waiting for processes to exit. Sep 16 04:52:32.306035 systemd-logind[1507]: Removed session 13. Sep 16 04:52:33.714262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1720991860.mount: Deactivated successfully. 
Sep 16 04:52:33.740337 containerd[1571]: time="2025-09-16T04:52:33.740253248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:33.741260 containerd[1571]: time="2025-09-16T04:52:33.741176745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 16 04:52:33.743040 containerd[1571]: time="2025-09-16T04:52:33.742980170Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:33.745551 containerd[1571]: time="2025-09-16T04:52:33.745496065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:33.746096 containerd[1571]: time="2025-09-16T04:52:33.746049637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.339056059s" Sep 16 04:52:33.746096 containerd[1571]: time="2025-09-16T04:52:33.746083570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 16 04:52:33.747450 containerd[1571]: time="2025-09-16T04:52:33.747403575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 16 04:52:33.748565 containerd[1571]: time="2025-09-16T04:52:33.748525818Z" level=info msg="CreateContainer within sandbox 
\"1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 16 04:52:33.760007 containerd[1571]: time="2025-09-16T04:52:33.759932366Z" level=info msg="Container d0030bc743ca072f99551d4d5926735c6492f147eaa01f9500a2388ba589d6a3: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:52:33.770623 containerd[1571]: time="2025-09-16T04:52:33.770569466Z" level=info msg="CreateContainer within sandbox \"1992d7c42db44ba27aeb0d2733f1440c3cd1326d968a20729ca9d2118be8ae66\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d0030bc743ca072f99551d4d5926735c6492f147eaa01f9500a2388ba589d6a3\"" Sep 16 04:52:33.771249 containerd[1571]: time="2025-09-16T04:52:33.771204903Z" level=info msg="StartContainer for \"d0030bc743ca072f99551d4d5926735c6492f147eaa01f9500a2388ba589d6a3\"" Sep 16 04:52:33.772659 containerd[1571]: time="2025-09-16T04:52:33.772629964Z" level=info msg="connecting to shim d0030bc743ca072f99551d4d5926735c6492f147eaa01f9500a2388ba589d6a3" address="unix:///run/containerd/s/5f4574d3a6de40f70a4dc037613c7f28744c2c425587aab908300debe324a74e" protocol=ttrpc version=3 Sep 16 04:52:33.805634 systemd[1]: Started cri-containerd-d0030bc743ca072f99551d4d5926735c6492f147eaa01f9500a2388ba589d6a3.scope - libcontainer container d0030bc743ca072f99551d4d5926735c6492f147eaa01f9500a2388ba589d6a3. 
Sep 16 04:52:33.865749 containerd[1571]: time="2025-09-16T04:52:33.865609916Z" level=info msg="StartContainer for \"d0030bc743ca072f99551d4d5926735c6492f147eaa01f9500a2388ba589d6a3\" returns successfully" Sep 16 04:52:34.179380 kubelet[2707]: I0916 04:52:34.179296 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7d85dfdfc7-ksrcl" podStartSLOduration=1.680015757 podStartE2EDuration="19.179271762s" podCreationTimestamp="2025-09-16 04:52:15 +0000 UTC" firstStartedPulling="2025-09-16 04:52:16.24788155 +0000 UTC m=+44.781156906" lastFinishedPulling="2025-09-16 04:52:33.747137555 +0000 UTC m=+62.280412911" observedRunningTime="2025-09-16 04:52:34.178802414 +0000 UTC m=+62.712077780" watchObservedRunningTime="2025-09-16 04:52:34.179271762 +0000 UTC m=+62.712547128" Sep 16 04:52:35.420917 containerd[1571]: time="2025-09-16T04:52:35.420817341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:35.421766 containerd[1571]: time="2025-09-16T04:52:35.421700827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 16 04:52:35.423221 containerd[1571]: time="2025-09-16T04:52:35.423175245Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:35.426221 containerd[1571]: time="2025-09-16T04:52:35.426158968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:35.427111 containerd[1571]: time="2025-09-16T04:52:35.427069807Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id 
\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.679603975s" Sep 16 04:52:35.427111 containerd[1571]: time="2025-09-16T04:52:35.427107319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 16 04:52:35.430378 containerd[1571]: time="2025-09-16T04:52:35.430335895Z" level=info msg="CreateContainer within sandbox \"0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 16 04:52:35.439988 containerd[1571]: time="2025-09-16T04:52:35.439921849Z" level=info msg="Container e91a54642a1fe82c48c163b54e9233d7bce460085d3b6fdc25c4cfddc753ac69: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:52:35.457684 containerd[1571]: time="2025-09-16T04:52:35.457621313Z" level=info msg="CreateContainer within sandbox \"0d6f7d79bc6da7808741eb2c7b5db49d521f9a200acfd7f414af92382a6abc85\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e91a54642a1fe82c48c163b54e9233d7bce460085d3b6fdc25c4cfddc753ac69\"" Sep 16 04:52:35.458257 containerd[1571]: time="2025-09-16T04:52:35.458191855Z" level=info msg="StartContainer for \"e91a54642a1fe82c48c163b54e9233d7bce460085d3b6fdc25c4cfddc753ac69\"" Sep 16 04:52:35.459958 containerd[1571]: time="2025-09-16T04:52:35.459923971Z" level=info msg="connecting to shim e91a54642a1fe82c48c163b54e9233d7bce460085d3b6fdc25c4cfddc753ac69" address="unix:///run/containerd/s/5aedfcb83ec729e9734e1dff55ac1f893184a1c06e5bcfb9b3f085e34333cd52" protocol=ttrpc version=3 Sep 16 04:52:35.487739 systemd[1]: Started 
cri-containerd-e91a54642a1fe82c48c163b54e9233d7bce460085d3b6fdc25c4cfddc753ac69.scope - libcontainer container e91a54642a1fe82c48c163b54e9233d7bce460085d3b6fdc25c4cfddc753ac69. Sep 16 04:52:35.538742 containerd[1571]: time="2025-09-16T04:52:35.538689787Z" level=info msg="StartContainer for \"e91a54642a1fe82c48c163b54e9233d7bce460085d3b6fdc25c4cfddc753ac69\" returns successfully" Sep 16 04:52:35.769206 kubelet[2707]: I0916 04:52:35.769027 2707 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 16 04:52:35.769206 kubelet[2707]: I0916 04:52:35.769078 2707 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 16 04:52:36.173826 kubelet[2707]: I0916 04:52:36.172585 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mxpq2" podStartSLOduration=29.689250155 podStartE2EDuration="47.172559011s" podCreationTimestamp="2025-09-16 04:51:49 +0000 UTC" firstStartedPulling="2025-09-16 04:52:17.944523073 +0000 UTC m=+46.477798429" lastFinishedPulling="2025-09-16 04:52:35.427831939 +0000 UTC m=+63.961107285" observedRunningTime="2025-09-16 04:52:36.172269112 +0000 UTC m=+64.705544488" watchObservedRunningTime="2025-09-16 04:52:36.172559011 +0000 UTC m=+64.705834387" Sep 16 04:52:37.310193 systemd[1]: Started sshd@13-10.0.0.58:22-10.0.0.1:59272.service - OpenSSH per-connection server daemon (10.0.0.1:59272). Sep 16 04:52:37.390683 sshd[5506]: Accepted publickey for core from 10.0.0.1 port 59272 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:52:37.392782 sshd-session[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:37.398800 systemd-logind[1507]: New session 14 of user core. 
Sep 16 04:52:37.404618 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 16 04:52:37.539804 sshd[5509]: Connection closed by 10.0.0.1 port 59272 Sep 16 04:52:37.540333 sshd-session[5506]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:37.546002 systemd[1]: sshd@13-10.0.0.58:22-10.0.0.1:59272.service: Deactivated successfully. Sep 16 04:52:37.549033 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 04:52:37.550834 systemd-logind[1507]: Session 14 logged out. Waiting for processes to exit. Sep 16 04:52:37.552524 systemd-logind[1507]: Removed session 14. Sep 16 04:52:38.570633 kubelet[2707]: I0916 04:52:38.570516 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:52:39.104454 kubelet[2707]: I0916 04:52:39.104374 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:52:42.554151 systemd[1]: Started sshd@14-10.0.0.58:22-10.0.0.1:40834.service - OpenSSH per-connection server daemon (10.0.0.1:40834). Sep 16 04:52:42.620120 sshd[5536]: Accepted publickey for core from 10.0.0.1 port 40834 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:52:42.621842 sshd-session[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:42.627004 systemd-logind[1507]: New session 15 of user core. Sep 16 04:52:42.638649 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 04:52:42.759681 sshd[5539]: Connection closed by 10.0.0.1 port 40834 Sep 16 04:52:42.760035 sshd-session[5536]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:42.765279 systemd[1]: sshd@14-10.0.0.58:22-10.0.0.1:40834.service: Deactivated successfully. Sep 16 04:52:42.767839 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 04:52:42.769284 systemd-logind[1507]: Session 15 logged out. Waiting for processes to exit. Sep 16 04:52:42.771374 systemd-logind[1507]: Removed session 15. 
Sep 16 04:52:43.705642 containerd[1571]: time="2025-09-16T04:52:43.705585263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6145b968adc0f25af90a4037e11423f1999c357e8a44e29652b3107d4d0c1820\" id:\"229111050e7708e052535c8b715acd150e186c4327fb1b6d58b2604d1e0e03e1\" pid:5564 exited_at:{seconds:1757998363 nanos:705211726}" Sep 16 04:52:45.670690 kubelet[2707]: E0916 04:52:45.670597 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:52:47.776217 systemd[1]: Started sshd@15-10.0.0.58:22-10.0.0.1:40850.service - OpenSSH per-connection server daemon (10.0.0.1:40850). Sep 16 04:52:47.857451 sshd[5575]: Accepted publickey for core from 10.0.0.1 port 40850 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:52:47.859297 sshd-session[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:47.863989 systemd-logind[1507]: New session 16 of user core. Sep 16 04:52:47.873557 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 16 04:52:48.029545 sshd[5578]: Connection closed by 10.0.0.1 port 40850 Sep 16 04:52:48.030108 sshd-session[5575]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:48.042023 systemd[1]: sshd@15-10.0.0.58:22-10.0.0.1:40850.service: Deactivated successfully. Sep 16 04:52:48.044535 systemd[1]: session-16.scope: Deactivated successfully. Sep 16 04:52:48.045566 systemd-logind[1507]: Session 16 logged out. Waiting for processes to exit. Sep 16 04:52:48.049814 systemd[1]: Started sshd@16-10.0.0.58:22-10.0.0.1:40858.service - OpenSSH per-connection server daemon (10.0.0.1:40858). Sep 16 04:52:48.050806 systemd-logind[1507]: Removed session 16. 
Sep 16 04:52:48.111828 sshd[5593]: Accepted publickey for core from 10.0.0.1 port 40858 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:52:48.114006 sshd-session[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:48.119800 systemd-logind[1507]: New session 17 of user core. Sep 16 04:52:48.130639 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 16 04:52:48.412470 sshd[5596]: Connection closed by 10.0.0.1 port 40858 Sep 16 04:52:48.413055 sshd-session[5593]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:48.424709 systemd[1]: sshd@16-10.0.0.58:22-10.0.0.1:40858.service: Deactivated successfully. Sep 16 04:52:48.427523 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 04:52:48.428596 systemd-logind[1507]: Session 17 logged out. Waiting for processes to exit. Sep 16 04:52:48.432854 systemd[1]: Started sshd@17-10.0.0.58:22-10.0.0.1:40874.service - OpenSSH per-connection server daemon (10.0.0.1:40874). Sep 16 04:52:48.433759 systemd-logind[1507]: Removed session 17. Sep 16 04:52:48.498338 sshd[5607]: Accepted publickey for core from 10.0.0.1 port 40874 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:52:48.500246 sshd-session[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:48.505556 systemd-logind[1507]: New session 18 of user core. Sep 16 04:52:48.516728 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 16 04:52:50.400472 sshd[5610]: Connection closed by 10.0.0.1 port 40874 Sep 16 04:52:50.400829 sshd-session[5607]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:50.418349 systemd[1]: Started sshd@18-10.0.0.58:22-10.0.0.1:49002.service - OpenSSH per-connection server daemon (10.0.0.1:49002). Sep 16 04:52:50.426926 systemd[1]: sshd@17-10.0.0.58:22-10.0.0.1:40874.service: Deactivated successfully. 
Sep 16 04:52:50.435028 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 04:52:50.437830 systemd[1]: session-18.scope: Consumed 734ms CPU time, 79.5M memory peak. Sep 16 04:52:50.442812 systemd-logind[1507]: Session 18 logged out. Waiting for processes to exit. Sep 16 04:52:50.444528 systemd-logind[1507]: Removed session 18. Sep 16 04:52:50.512245 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 49002 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:52:50.514019 sshd-session[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:50.520386 systemd-logind[1507]: New session 19 of user core. Sep 16 04:52:50.529581 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 16 04:52:50.933940 sshd[5635]: Connection closed by 10.0.0.1 port 49002 Sep 16 04:52:50.935999 sshd-session[5629]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:50.946824 systemd[1]: sshd@18-10.0.0.58:22-10.0.0.1:49002.service: Deactivated successfully. Sep 16 04:52:50.951231 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 04:52:50.952408 systemd-logind[1507]: Session 19 logged out. Waiting for processes to exit. Sep 16 04:52:50.957469 systemd[1]: Started sshd@19-10.0.0.58:22-10.0.0.1:49008.service - OpenSSH per-connection server daemon (10.0.0.1:49008). Sep 16 04:52:50.958727 systemd-logind[1507]: Removed session 19. Sep 16 04:52:51.012578 sshd[5647]: Accepted publickey for core from 10.0.0.1 port 49008 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:52:51.014302 sshd-session[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:51.020324 systemd-logind[1507]: New session 20 of user core. Sep 16 04:52:51.030608 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 16 04:52:51.152112 sshd[5650]: Connection closed by 10.0.0.1 port 49008 Sep 16 04:52:51.152721 sshd-session[5647]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:51.158836 systemd[1]: sshd@19-10.0.0.58:22-10.0.0.1:49008.service: Deactivated successfully. Sep 16 04:52:51.161522 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 04:52:51.162916 systemd-logind[1507]: Session 20 logged out. Waiting for processes to exit. Sep 16 04:52:51.164980 systemd-logind[1507]: Removed session 20. Sep 16 04:52:56.168899 systemd[1]: Started sshd@20-10.0.0.58:22-10.0.0.1:49020.service - OpenSSH per-connection server daemon (10.0.0.1:49020). Sep 16 04:52:56.228067 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 49020 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:52:56.230527 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:56.236003 systemd-logind[1507]: New session 21 of user core. Sep 16 04:52:56.243797 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 16 04:52:56.383326 sshd[5667]: Connection closed by 10.0.0.1 port 49020 Sep 16 04:52:56.384006 sshd-session[5664]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:56.389520 systemd[1]: sshd@20-10.0.0.58:22-10.0.0.1:49020.service: Deactivated successfully. Sep 16 04:52:56.392291 systemd[1]: session-21.scope: Deactivated successfully. Sep 16 04:52:56.394292 systemd-logind[1507]: Session 21 logged out. Waiting for processes to exit. Sep 16 04:52:56.396071 systemd-logind[1507]: Removed session 21. 
Sep 16 04:52:59.038550 containerd[1571]: time="2025-09-16T04:52:59.038469132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87bb5b137211b45436b07ef2d37455f7d4f1f5f8bb9ce550c94754c69563caa0\" id:\"9c1b10056f77b45a32ce920317442ab6e34d3b62d70c6c9369d56594040cb3f6\" pid:5701 exited_at:{seconds:1757998379 nanos:37981663}" Sep 16 04:53:01.402004 systemd[1]: Started sshd@21-10.0.0.58:22-10.0.0.1:56434.service - OpenSSH per-connection server daemon (10.0.0.1:56434). Sep 16 04:53:01.489170 sshd[5715]: Accepted publickey for core from 10.0.0.1 port 56434 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:53:01.491419 sshd-session[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:53:01.496703 systemd-logind[1507]: New session 22 of user core. Sep 16 04:53:01.504732 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 16 04:53:01.680009 kubelet[2707]: E0916 04:53:01.678764 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:53:01.806893 sshd[5718]: Connection closed by 10.0.0.1 port 56434 Sep 16 04:53:01.807807 sshd-session[5715]: pam_unix(sshd:session): session closed for user core Sep 16 04:53:01.818028 systemd[1]: sshd@21-10.0.0.58:22-10.0.0.1:56434.service: Deactivated successfully. Sep 16 04:53:01.822542 systemd[1]: session-22.scope: Deactivated successfully. Sep 16 04:53:01.823908 systemd-logind[1507]: Session 22 logged out. Waiting for processes to exit. Sep 16 04:53:01.826003 systemd-logind[1507]: Removed session 22. 
Sep 16 04:53:02.125113 containerd[1571]: time="2025-09-16T04:53:02.125042973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"93b3c4140cd5ae342c6658a1a7369dc7231b43b0c1bcd2f2eb64f51e3f73c682\" id:\"813086baf89848135dbb491154a11c04f50646813f3e67f3509d80d6a73cb096\" pid:5744 exited_at:{seconds:1757998382 nanos:117629708}" Sep 16 04:53:02.180237 containerd[1571]: time="2025-09-16T04:53:02.180171733Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6145b968adc0f25af90a4037e11423f1999c357e8a44e29652b3107d4d0c1820\" id:\"f636d3f66474c8a9218e1e1674b1b305926e613d65a0d55d63e7521de18002a9\" pid:5763 exited_at:{seconds:1757998382 nanos:179587581}" Sep 16 04:53:02.669811 kubelet[2707]: E0916 04:53:02.669738 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:53:03.670830 kubelet[2707]: E0916 04:53:03.670745 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:53:06.819802 systemd[1]: Started sshd@22-10.0.0.58:22-10.0.0.1:56436.service - OpenSSH per-connection server daemon (10.0.0.1:56436). Sep 16 04:53:06.878108 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 56436 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:53:06.879989 sshd-session[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:53:06.885090 systemd-logind[1507]: New session 23 of user core. Sep 16 04:53:06.899726 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 16 04:53:07.136170 sshd[5782]: Connection closed by 10.0.0.1 port 56436 Sep 16 04:53:07.137870 sshd-session[5779]: pam_unix(sshd:session): session closed for user core Sep 16 04:53:07.141813 systemd[1]: sshd@22-10.0.0.58:22-10.0.0.1:56436.service: Deactivated successfully. Sep 16 04:53:07.144666 systemd[1]: session-23.scope: Deactivated successfully. Sep 16 04:53:07.146815 systemd-logind[1507]: Session 23 logged out. Waiting for processes to exit. Sep 16 04:53:07.148686 systemd-logind[1507]: Removed session 23. Sep 16 04:53:12.153179 systemd[1]: Started sshd@23-10.0.0.58:22-10.0.0.1:51432.service - OpenSSH per-connection server daemon (10.0.0.1:51432). Sep 16 04:53:12.245998 sshd[5800]: Accepted publickey for core from 10.0.0.1 port 51432 ssh2: RSA SHA256:mbQbrRoQoFei5kIXvdhlqPTOzK4bL8i/kdyxZ8Q4lDE Sep 16 04:53:12.249316 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:53:12.254813 systemd-logind[1507]: New session 24 of user core. Sep 16 04:53:12.262612 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 16 04:53:12.431582 sshd[5803]: Connection closed by 10.0.0.1 port 51432 Sep 16 04:53:12.432174 sshd-session[5800]: pam_unix(sshd:session): session closed for user core Sep 16 04:53:12.436529 systemd[1]: sshd@23-10.0.0.58:22-10.0.0.1:51432.service: Deactivated successfully. Sep 16 04:53:12.439053 systemd[1]: session-24.scope: Deactivated successfully. Sep 16 04:53:12.441018 systemd-logind[1507]: Session 24 logged out. Waiting for processes to exit. Sep 16 04:53:12.443291 systemd-logind[1507]: Removed session 24.