Sep 4 04:25:38.878718 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 02:15:54 -00 2025
Sep 4 04:25:38.878750 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d1884c9a158af3462973a912ddb17d2a643da411fd9cba6f05e0fc855c1b0a44
Sep 4 04:25:38.878767 kernel: BIOS-provided physical RAM map:
Sep 4 04:25:38.878776 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 04:25:38.878785 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 4 04:25:38.878794 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 4 04:25:38.878805 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 4 04:25:38.878814 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 4 04:25:38.878833 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 4 04:25:38.878843 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 4 04:25:38.878852 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 4 04:25:38.878862 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 4 04:25:38.878871 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 4 04:25:38.878881 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 4 04:25:38.878898 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 4 04:25:38.878908 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 4 04:25:38.878923 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 4 04:25:38.878933 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 4 04:25:38.878943 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 4 04:25:38.878954 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 4 04:25:38.878964 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 4 04:25:38.878974 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 4 04:25:38.878984 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 4 04:25:38.878995 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 04:25:38.879005 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 4 04:25:38.879019 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 04:25:38.879029 kernel: NX (Execute Disable) protection: active
Sep 4 04:25:38.879039 kernel: APIC: Static calls initialized
Sep 4 04:25:38.879049 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 4 04:25:38.879060 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 4 04:25:38.879070 kernel: extended physical RAM map:
Sep 4 04:25:38.879080 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 04:25:38.879090 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 4 04:25:38.879100 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 4 04:25:38.879111 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 4 04:25:38.879121 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 4 04:25:38.879136 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 4 04:25:38.879146 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 4 04:25:38.879157 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 4 04:25:38.879167 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 4 04:25:38.879184 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 4 04:25:38.879195 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 4 04:25:38.879209 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 4 04:25:38.879221 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 4 04:25:38.879231 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 4 04:25:38.879240 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 4 04:25:38.879250 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 4 04:25:38.879260 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 4 04:25:38.879270 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 4 04:25:38.879279 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 4 04:25:38.879330 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 4 04:25:38.879341 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 4 04:25:38.879357 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 4 04:25:38.879368 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 4 04:25:38.879377 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 4 04:25:38.879388 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 04:25:38.879398 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 4 04:25:38.879409 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 04:25:38.879425 kernel: efi: EFI v2.7 by EDK II
Sep 4 04:25:38.879435 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 4 04:25:38.879445 kernel: random: crng init done
Sep 4 04:25:38.879460 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 4 04:25:38.879471 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 4 04:25:38.879490 kernel: secureboot: Secure boot disabled
Sep 4 04:25:38.879501 kernel: SMBIOS 2.8 present.
Sep 4 04:25:38.879512 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 4 04:25:38.879523 kernel: DMI: Memory slots populated: 1/1
Sep 4 04:25:38.879533 kernel: Hypervisor detected: KVM
Sep 4 04:25:38.879543 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 04:25:38.879553 kernel: kvm-clock: using sched offset of 6386459789 cycles
Sep 4 04:25:38.879565 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 04:25:38.879576 kernel: tsc: Detected 2794.750 MHz processor
Sep 4 04:25:38.879586 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 04:25:38.879602 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 04:25:38.879613 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 4 04:25:38.879623 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 4 04:25:38.879634 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 04:25:38.879645 kernel: Using GB pages for direct mapping
Sep 4 04:25:38.879656 kernel: ACPI: Early table checksum verification disabled
Sep 4 04:25:38.879667 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 4 04:25:38.879678 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 4 04:25:38.879689 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:25:38.879705 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:25:38.879716 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 4 04:25:38.879727 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:25:38.879738 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:25:38.879749 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:25:38.879760 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:25:38.879771 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 4 04:25:38.879782 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 4 04:25:38.879793 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 4 04:25:38.879810 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 4 04:25:38.879821 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 4 04:25:38.879832 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 4 04:25:38.879843 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 4 04:25:38.879854 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 4 04:25:38.879865 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 4 04:25:38.879876 kernel: No NUMA configuration found
Sep 4 04:25:38.879887 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 4 04:25:38.879898 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 4 04:25:38.879915 kernel: Zone ranges:
Sep 4 04:25:38.879926 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 04:25:38.879954 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cedbfff]
Sep 4 04:25:38.879965 kernel:   Normal   empty
Sep 4 04:25:38.879975 kernel:   Device   empty
Sep 4 04:25:38.879985 kernel: Movable zone start for each node
Sep 4 04:25:38.879995 kernel: Early memory node ranges
Sep 4 04:25:38.880006 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 4 04:25:38.880016 kernel:   node   0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 4 04:25:38.880027 kernel:   node   0: [mem 0x0000000000808000-0x000000000080afff]
Sep 4 04:25:38.880055 kernel:   node   0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 4 04:25:38.880079 kernel:   node   0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 4 04:25:38.880089 kernel:   node   0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 4 04:25:38.880100 kernel:   node   0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 4 04:25:38.880111 kernel:   node   0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 4 04:25:38.880122 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 4 04:25:38.880132 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 04:25:38.880168 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 4 04:25:38.880208 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 4 04:25:38.880219 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 04:25:38.880230 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 4 04:25:38.880241 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 4 04:25:38.880256 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 4 04:25:38.880267 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 4 04:25:38.880278 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 4 04:25:38.880308 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 04:25:38.880328 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 04:25:38.880343 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 04:25:38.880354 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 04:25:38.880365 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 04:25:38.880375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 04:25:38.880386 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 04:25:38.880397 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 04:25:38.880407 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 04:25:38.880418 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 04:25:38.880428 kernel: TSC deadline timer available
Sep 4 04:25:38.880439 kernel: CPU topo: Max. logical packages:   1
Sep 4 04:25:38.880447 kernel: CPU topo: Max. logical dies:       1
Sep 4 04:25:38.880455 kernel: CPU topo: Max. dies per package:   1
Sep 4 04:25:38.880463 kernel: CPU topo: Max. threads per core:   1
Sep 4 04:25:38.880471 kernel: CPU topo: Num. cores per package:  4
Sep 4 04:25:38.880479 kernel: CPU topo: Num. threads per package: 4
Sep 4 04:25:38.880487 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 4 04:25:38.880495 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 04:25:38.880503 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 4 04:25:38.880514 kernel: kvm-guest: setup PV sched yield
Sep 4 04:25:38.880522 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 4 04:25:38.880529 kernel: Booting paravirtualized kernel on KVM
Sep 4 04:25:38.880538 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 04:25:38.880546 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 4 04:25:38.880554 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 4 04:25:38.880562 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 4 04:25:38.880570 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 4 04:25:38.880578 kernel: kvm-guest: PV spinlocks enabled
Sep 4 04:25:38.880589 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 04:25:38.880598 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d1884c9a158af3462973a912ddb17d2a643da411fd9cba6f05e0fc855c1b0a44
Sep 4 04:25:38.880611 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 04:25:38.880619 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 04:25:38.880627 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 04:25:38.880635 kernel: Fallback order for Node 0: 0
Sep 4 04:25:38.880643 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 4 04:25:38.880651 kernel: Policy zone: DMA32
Sep 4 04:25:38.880661 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 04:25:38.880669 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 04:25:38.880677 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 4 04:25:38.880685 kernel: ftrace: allocated 157 pages with 5 groups
Sep 4 04:25:38.880693 kernel: Dynamic Preempt: voluntary
Sep 4 04:25:38.880701 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 04:25:38.880710 kernel: rcu: RCU event tracing is enabled.
Sep 4 04:25:38.880719 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 04:25:38.880727 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 04:25:38.880738 kernel: Rude variant of Tasks RCU enabled.
Sep 4 04:25:38.880746 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 04:25:38.880754 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 04:25:38.880765 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 04:25:38.880773 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 04:25:38.880781 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 04:25:38.880789 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 04:25:38.880797 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 4 04:25:38.880805 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 04:25:38.880816 kernel: Console: colour dummy device 80x25
Sep 4 04:25:38.880824 kernel: printk: legacy console [ttyS0] enabled
Sep 4 04:25:38.880832 kernel: ACPI: Core revision 20240827
Sep 4 04:25:38.880840 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 04:25:38.880848 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 04:25:38.880856 kernel: x2apic enabled
Sep 4 04:25:38.880864 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 04:25:38.880872 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 4 04:25:38.880880 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 4 04:25:38.880891 kernel: kvm-guest: setup PV IPIs
Sep 4 04:25:38.880899 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 04:25:38.880907 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 4 04:25:38.880915 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 4 04:25:38.880923 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 04:25:38.880931 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 04:25:38.880939 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 04:25:38.880947 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 04:25:38.880955 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 04:25:38.880966 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 04:25:38.880974 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 04:25:38.880982 kernel: active return thunk: retbleed_return_thunk
Sep 4 04:25:38.880990 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 04:25:38.881000 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 04:25:38.881008 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 04:25:38.881017 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 4 04:25:38.881029 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 4 04:25:38.881043 kernel: active return thunk: srso_return_thunk
Sep 4 04:25:38.881054 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 4 04:25:38.881065 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 04:25:38.881075 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 04:25:38.881086 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 04:25:38.881097 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Sep 4 04:25:38.881107 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 04:25:38.881118 kernel: Freeing SMP alternatives memory: 32K
Sep 4 04:25:38.881129 kernel: pid_max: default: 32768 minimum: 301
Sep 4 04:25:38.881144 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 4 04:25:38.881155 kernel: landlock: Up and running.
Sep 4 04:25:38.881166 kernel: SELinux:  Initializing.
Sep 4 04:25:38.881177 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 04:25:38.881189 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 04:25:38.881200 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 04:25:38.881211 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 04:25:38.881222 kernel: ... version:                0
Sep 4 04:25:38.881234 kernel: ... bit width:              48
Sep 4 04:25:38.881249 kernel: ... generic registers:      6
Sep 4 04:25:38.881260 kernel: ... value mask:             0000ffffffffffff
Sep 4 04:25:38.881272 kernel: ... max period:             00007fffffffffff
Sep 4 04:25:38.881302 kernel: ... fixed-purpose events:   0
Sep 4 04:25:38.881322 kernel: ... event mask:             000000000000003f
Sep 4 04:25:38.881333 kernel: signal: max sigframe size: 1776
Sep 4 04:25:38.881343 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 04:25:38.881354 kernel: rcu: 	Max phase no-delay instances is 400.
Sep 4 04:25:38.881369 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 4 04:25:38.881380 kernel: smp: Bringing up secondary CPUs ...
Sep 4 04:25:38.881395 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 04:25:38.881407 kernel: .... node  #0, CPUs:        #1  #2  #3
Sep 4 04:25:38.881418 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 04:25:38.881429 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 4 04:25:38.881440 kernel: Memory: 2420628K/2565800K available (14336K kernel code, 2428K rwdata, 9988K rodata, 57768K init, 1248K bss, 139244K reserved, 0K cma-reserved)
Sep 4 04:25:38.881451 kernel: devtmpfs: initialized
Sep 4 04:25:38.881462 kernel: x86/mm: Memory block size: 128MB
Sep 4 04:25:38.881472 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 4 04:25:38.881483 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 4 04:25:38.881498 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 4 04:25:38.881509 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 4 04:25:38.881520 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 4 04:25:38.881530 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 4 04:25:38.881541 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 04:25:38.881551 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 04:25:38.881559 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 04:25:38.881574 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 04:25:38.881589 kernel: audit: initializing netlink subsys (disabled)
Sep 4 04:25:38.881601 kernel: audit: type=2000 audit(1756959934.683:1): state=initialized audit_enabled=0 res=1
Sep 4 04:25:38.881613 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 04:25:38.881629 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 04:25:38.881645 kernel: cpuidle: using governor menu
Sep 4 04:25:38.881657 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 04:25:38.881664 kernel: dca service started, version 1.12.1
Sep 4 04:25:38.881672 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 4 04:25:38.881680 kernel: PCI: Using configuration type 1 for base access
Sep 4 04:25:38.881691 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 04:25:38.881699 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 04:25:38.881707 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 04:25:38.881715 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 04:25:38.881723 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 04:25:38.881733 kernel: ACPI: Added _OSI(Module Device)
Sep 4 04:25:38.881742 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 04:25:38.881752 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 04:25:38.881761 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 04:25:38.881771 kernel: ACPI: Interpreter enabled
Sep 4 04:25:38.881779 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 04:25:38.881787 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 04:25:38.881795 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 04:25:38.881805 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 04:25:38.881816 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 4 04:25:38.881827 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 04:25:38.882223 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 04:25:38.882423 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 4 04:25:38.882560 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 4 04:25:38.882572 kernel: PCI host bridge to bus 0000:00
Sep 4 04:25:38.882714 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 04:25:38.882830 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 04:25:38.882942 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 04:25:38.883053 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 4 04:25:38.883172 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 4 04:25:38.883301 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 4 04:25:38.883447 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 04:25:38.883614 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 4 04:25:38.883782 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 4 04:25:38.883944 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 4 04:25:38.884080 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 4 04:25:38.884204 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 4 04:25:38.884413 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 04:25:38.884628 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 4 04:25:38.884757 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 4 04:25:38.884880 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 4 04:25:38.885006 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 4 04:25:38.885173 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 4 04:25:38.885323 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 4 04:25:38.885464 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 4 04:25:38.885589 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 4 04:25:38.885732 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 4 04:25:38.885857 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 4 04:25:38.885980 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 4 04:25:38.886110 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 4 04:25:38.886232 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 4 04:25:38.886450 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 4 04:25:38.886577 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 4 04:25:38.886715 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 4 04:25:38.886840 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 4 04:25:38.886962 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 4 04:25:38.887107 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 4 04:25:38.887232 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 4 04:25:38.887243 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 04:25:38.887252 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 04:25:38.887260 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 04:25:38.887268 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 04:25:38.887276 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 4 04:25:38.887303 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 4 04:25:38.887311 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 4 04:25:38.887329 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 4 04:25:38.887337 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 4 04:25:38.887345 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 4 04:25:38.887355 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 4 04:25:38.887366 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 4 04:25:38.887376 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 4 04:25:38.887387 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 4 04:25:38.887399 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 4 04:25:38.887408 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 4 04:25:38.887416 kernel: iommu: Default domain type: Translated
Sep 4 04:25:38.887424 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 04:25:38.887431 kernel: efivars: Registered efivars operations
Sep 4 04:25:38.887439 kernel: PCI: Using ACPI for IRQ routing
Sep 4 04:25:38.887447 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 04:25:38.887455 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 4 04:25:38.887463 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 4 04:25:38.887471 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 4 04:25:38.887481 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 4 04:25:38.887489 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 4 04:25:38.887497 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 4 04:25:38.887505 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 4 04:25:38.887513 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 4 04:25:38.887645 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 4 04:25:38.887770 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 4 04:25:38.887898 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 04:25:38.887910 kernel: vgaarb: loaded
Sep 4 04:25:38.887918 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 04:25:38.887926 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 04:25:38.887934 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 04:25:38.887942 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 04:25:38.887950 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 04:25:38.887958 kernel: pnp: PnP ACPI init
Sep 4 04:25:38.888125 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 4 04:25:38.888144 kernel: pnp: PnP ACPI: found 6 devices
Sep 4 04:25:38.888152 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 04:25:38.888161 kernel: NET: Registered PF_INET protocol family
Sep 4 04:25:38.888169 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 04:25:38.888177 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 04:25:38.888185 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 04:25:38.888194 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 04:25:38.888202 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 04:25:38.888213 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 04:25:38.888221 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 04:25:38.888230 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 04:25:38.888238 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 04:25:38.888246 kernel: NET: Registered PF_XDP protocol family
Sep 4 04:25:38.888409 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 4 04:25:38.888539 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 4 04:25:38.888655 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 04:25:38.888774 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 04:25:38.888887 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 04:25:38.889000 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 4 04:25:38.889119 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 4 04:25:38.889232 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 4 04:25:38.889244 kernel: PCI: CLS 0 bytes, default 64
Sep 4 04:25:38.889253 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 4 04:25:38.889261 kernel: Initialise system trusted keyrings
Sep 4 04:25:38.889273 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 04:25:38.889309 kernel: Key type asymmetric registered
Sep 4 04:25:38.889326 kernel: Asymmetric key parser 'x509' registered
Sep 4 04:25:38.889334 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 04:25:38.889343 kernel: io scheduler mq-deadline registered
Sep 4 04:25:38.889353 kernel: io scheduler kyber registered
Sep 4 04:25:38.889368 kernel: io scheduler bfq registered
Sep 4 04:25:38.889379 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 04:25:38.889390 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 4 04:25:38.889398 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 4 04:25:38.889407 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 4 04:25:38.889415 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 04:25:38.889424 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 04:25:38.889432 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 04:25:38.889440 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 04:25:38.889449 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 04:25:38.889625 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 4 04:25:38.889638 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 04:25:38.889756 kernel: rtc_cmos 00:04: registered as rtc0
Sep 4 04:25:38.889873 kernel: rtc_cmos 00:04: setting system clock to 2025-09-04T04:25:38 UTC
(1756959938) Sep 4 04:25:38.890000 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 4 04:25:38.890012 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 4 04:25:38.890021 kernel: efifb: probing for efifb Sep 4 04:25:38.890033 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 4 04:25:38.890042 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 4 04:25:38.890050 kernel: efifb: scrolling: redraw Sep 4 04:25:38.890058 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 4 04:25:38.890066 kernel: Console: switching to colour frame buffer device 160x50 Sep 4 04:25:38.890075 kernel: fb0: EFI VGA frame buffer device Sep 4 04:25:38.890083 kernel: pstore: Using crash dump compression: deflate Sep 4 04:25:38.890091 kernel: pstore: Registered efi_pstore as persistent store backend Sep 4 04:25:38.890100 kernel: NET: Registered PF_INET6 protocol family Sep 4 04:25:38.890110 kernel: Segment Routing with IPv6 Sep 4 04:25:38.890119 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 04:25:38.890127 kernel: NET: Registered PF_PACKET protocol family Sep 4 04:25:38.890135 kernel: Key type dns_resolver registered Sep 4 04:25:38.890143 kernel: IPI shorthand broadcast: enabled Sep 4 04:25:38.890152 kernel: sched_clock: Marking stable (3535002959, 291927208)->(3988903142, -161972975) Sep 4 04:25:38.890160 kernel: registered taskstats version 1 Sep 4 04:25:38.890168 kernel: Loading compiled-in X.509 certificates Sep 4 04:25:38.890177 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 2c6c093c583f207375cbe16db1a23ce651c8380d' Sep 4 04:25:38.890185 kernel: Demotion targets for Node 0: null Sep 4 04:25:38.890196 kernel: Key type .fscrypt registered Sep 4 04:25:38.890204 kernel: Key type fscrypt-provisioning registered Sep 4 04:25:38.890212 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 4 04:25:38.890220 kernel: ima: Allocated hash algorithm: sha1 Sep 4 04:25:38.890228 kernel: ima: No architecture policies found Sep 4 04:25:38.890236 kernel: clk: Disabling unused clocks Sep 4 04:25:38.890244 kernel: Warning: unable to open an initial console. Sep 4 04:25:38.890253 kernel: Freeing unused kernel image (initmem) memory: 57768K Sep 4 04:25:38.890263 kernel: Write protecting the kernel read-only data: 24576k Sep 4 04:25:38.890272 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 4 04:25:38.890295 kernel: Run /init as init process Sep 4 04:25:38.890304 kernel: with arguments: Sep 4 04:25:38.890318 kernel: /init Sep 4 04:25:38.890327 kernel: with environment: Sep 4 04:25:38.890335 kernel: HOME=/ Sep 4 04:25:38.890343 kernel: TERM=linux Sep 4 04:25:38.890353 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 04:25:38.890366 systemd[1]: Successfully made /usr/ read-only. Sep 4 04:25:38.890385 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 04:25:38.890395 systemd[1]: Detected virtualization kvm. Sep 4 04:25:38.890403 systemd[1]: Detected architecture x86-64. Sep 4 04:25:38.890412 systemd[1]: Running in initrd. Sep 4 04:25:38.890420 systemd[1]: No hostname configured, using default hostname. Sep 4 04:25:38.890429 systemd[1]: Hostname set to . Sep 4 04:25:38.890441 systemd[1]: Initializing machine ID from VM UUID. Sep 4 04:25:38.890449 systemd[1]: Queued start job for default target initrd.target. Sep 4 04:25:38.890458 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 4 04:25:38.890467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 04:25:38.890477 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 04:25:38.890486 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 04:25:38.890498 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 04:25:38.890507 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 04:25:38.890520 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 04:25:38.890529 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 04:25:38.890538 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 04:25:38.890547 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 04:25:38.890556 systemd[1]: Reached target paths.target - Path Units. Sep 4 04:25:38.890567 systemd[1]: Reached target slices.target - Slice Units. Sep 4 04:25:38.890576 systemd[1]: Reached target swap.target - Swaps. Sep 4 04:25:38.890587 systemd[1]: Reached target timers.target - Timer Units. Sep 4 04:25:38.890599 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 04:25:38.890607 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 04:25:38.890616 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 04:25:38.890625 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 4 04:25:38.890634 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 04:25:38.890643 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 4 04:25:38.890652 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 04:25:38.890660 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 04:25:38.890672 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 04:25:38.890680 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 04:25:38.890689 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 04:25:38.890699 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 4 04:25:38.890707 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 04:25:38.890716 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 04:25:38.890725 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 04:25:38.890734 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 04:25:38.890743 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 04:25:38.890755 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 04:25:38.890764 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 04:25:38.890773 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 04:25:38.890806 systemd-journald[220]: Collecting audit messages is disabled. Sep 4 04:25:38.890829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 04:25:38.890839 systemd-journald[220]: Journal started Sep 4 04:25:38.890861 systemd-journald[220]: Runtime Journal (/run/log/journal/21381edcb99a4f9f88b284d893dc2b61) is 6M, max 48.5M, 42.4M free. Sep 4 04:25:38.882211 systemd-modules-load[222]: Inserted module 'overlay' Sep 4 04:25:38.922328 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 4 04:25:38.929411 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 04:25:38.931354 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 04:25:38.933555 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 04:25:38.940425 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 04:25:38.948319 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 04:25:38.951975 kernel: Bridge firewalling registered Sep 4 04:25:38.950975 systemd-modules-load[222]: Inserted module 'br_netfilter' Sep 4 04:25:38.953165 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 4 04:25:38.953261 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 04:25:38.956661 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 04:25:39.008943 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 04:25:39.022505 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 04:25:39.022832 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 04:25:39.029462 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 04:25:39.034457 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 04:25:39.062078 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 4 04:25:39.086132 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d1884c9a158af3462973a912ddb17d2a643da411fd9cba6f05e0fc855c1b0a44 Sep 4 04:25:39.110666 systemd-resolved[262]: Positive Trust Anchors: Sep 4 04:25:39.110680 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 04:25:39.110719 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 04:25:39.113643 systemd-resolved[262]: Defaulting to hostname 'linux'. Sep 4 04:25:39.115072 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 04:25:39.128479 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 04:25:39.275330 kernel: SCSI subsystem initialized Sep 4 04:25:39.289349 kernel: Loading iSCSI transport class v2.0-870. Sep 4 04:25:39.311334 kernel: iscsi: registered transport (tcp) Sep 4 04:25:39.336337 kernel: iscsi: registered transport (qla4xxx) Sep 4 04:25:39.336414 kernel: QLogic iSCSI HBA Driver Sep 4 04:25:39.361027 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 4 04:25:39.412749 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 04:25:39.414162 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 04:25:39.479113 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 04:25:39.480897 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 04:25:39.622331 kernel: raid6: avx2x4 gen() 29528 MB/s Sep 4 04:25:39.639328 kernel: raid6: avx2x2 gen() 27128 MB/s Sep 4 04:25:39.701433 kernel: raid6: avx2x1 gen() 23403 MB/s Sep 4 04:25:39.701506 kernel: raid6: using algorithm avx2x4 gen() 29528 MB/s Sep 4 04:25:39.719637 kernel: raid6: .... xor() 5683 MB/s, rmw enabled Sep 4 04:25:39.719681 kernel: raid6: using avx2x2 recovery algorithm Sep 4 04:25:39.741354 kernel: xor: automatically using best checksumming function avx Sep 4 04:25:39.937352 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 04:25:39.947638 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 04:25:39.951380 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 04:25:39.998734 systemd-udevd[471]: Using default interface naming scheme 'v255'. Sep 4 04:25:40.006638 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 04:25:40.010187 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 04:25:40.034268 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Sep 4 04:25:40.067579 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 04:25:40.075313 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 04:25:40.162571 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 04:25:40.164200 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 4 04:25:40.221332 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 4 04:25:40.227316 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 4 04:25:40.230310 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 04:25:40.236389 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 04:25:40.236408 kernel: GPT:9289727 != 19775487 Sep 4 04:25:40.236422 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 04:25:40.236436 kernel: GPT:9289727 != 19775487 Sep 4 04:25:40.236454 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 04:25:40.236466 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 04:25:40.243393 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 04:25:40.256086 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 04:25:40.257730 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 04:25:40.260381 kernel: libata version 3.00 loaded. Sep 4 04:25:40.260083 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 04:25:40.264372 kernel: AES CTR mode by8 optimization enabled Sep 4 04:25:40.265653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 04:25:40.268548 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 04:25:40.273789 kernel: ahci 0000:00:1f.2: version 3.0 Sep 4 04:25:40.273998 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 4 04:25:40.285612 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 4 04:25:40.288248 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 4 04:25:40.290263 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 4 04:25:40.305039 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Sep 4 04:25:40.310545 kernel: scsi host0: ahci Sep 4 04:25:40.313338 kernel: scsi host1: ahci Sep 4 04:25:40.315315 kernel: scsi host2: ahci Sep 4 04:25:40.315774 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 04:25:40.317579 kernel: scsi host3: ahci Sep 4 04:25:40.317757 kernel: scsi host4: ahci Sep 4 04:25:40.317919 kernel: scsi host5: ahci Sep 4 04:25:40.318983 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 4 04:25:40.319012 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 4 04:25:40.321083 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 4 04:25:40.322159 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 4 04:25:40.322185 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 4 04:25:40.323227 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 4 04:25:40.334593 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 04:25:40.348890 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 04:25:40.348993 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 04:25:40.372648 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 04:25:40.374983 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 04:25:40.455478 disk-uuid[635]: Primary Header is updated. Sep 4 04:25:40.455478 disk-uuid[635]: Secondary Entries is updated. Sep 4 04:25:40.455478 disk-uuid[635]: Secondary Header is updated. 
Sep 4 04:25:40.461306 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 04:25:40.467346 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 04:25:40.629879 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 4 04:25:40.629972 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 4 04:25:40.629989 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 4 04:25:40.631321 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 4 04:25:40.632313 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 4 04:25:40.633333 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 4 04:25:40.633364 kernel: ata3.00: LPM support broken, forcing max_power Sep 4 04:25:40.634954 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 4 04:25:40.634976 kernel: ata3.00: applying bridge limits Sep 4 04:25:40.636557 kernel: ata3.00: LPM support broken, forcing max_power Sep 4 04:25:40.636667 kernel: ata3.00: configured for UDMA/100 Sep 4 04:25:40.639321 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 4 04:25:40.686355 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 4 04:25:40.686722 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 04:25:40.701326 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 4 04:25:41.128631 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 04:25:41.130729 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 04:25:41.133530 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 04:25:41.135143 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 04:25:41.140102 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 04:25:41.182479 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 04:25:41.473151 disk-uuid[636]: The operation has completed successfully. 
Sep 4 04:25:41.474627 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 04:25:41.506243 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 04:25:41.506438 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 04:25:41.565777 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 04:25:41.597814 sh[664]: Success Sep 4 04:25:41.619847 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 04:25:41.619933 kernel: device-mapper: uevent: version 1.0.3 Sep 4 04:25:41.621338 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 4 04:25:41.634313 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 4 04:25:41.674962 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 04:25:41.679131 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 04:25:41.701338 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 04:25:41.713173 kernel: BTRFS: device fsid c26d2db4-0109-42a5-bc6f-bbb834b82868 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (676) Sep 4 04:25:41.713268 kernel: BTRFS info (device dm-0): first mount of filesystem c26d2db4-0109-42a5-bc6f-bbb834b82868 Sep 4 04:25:41.713307 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 04:25:41.721404 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 04:25:41.721448 kernel: BTRFS info (device dm-0): enabling free space tree Sep 4 04:25:41.722992 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 04:25:41.723719 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 4 04:25:41.726123 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Sep 4 04:25:41.727452 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 04:25:41.730958 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 04:25:41.771321 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709) Sep 4 04:25:41.771390 kernel: BTRFS info (device vda6): first mount of filesystem 1535a26e-7205-4f17-83f6-e5f828340771 Sep 4 04:25:41.772606 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 04:25:41.776336 kernel: BTRFS info (device vda6): turning on async discard Sep 4 04:25:41.776367 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 04:25:41.782320 kernel: BTRFS info (device vda6): last unmount of filesystem 1535a26e-7205-4f17-83f6-e5f828340771 Sep 4 04:25:41.783848 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 04:25:41.787686 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 04:25:41.974595 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 04:25:41.979523 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 4 04:25:41.983085 ignition[754]: Ignition 2.22.0 Sep 4 04:25:41.983098 ignition[754]: Stage: fetch-offline Sep 4 04:25:41.983152 ignition[754]: no configs at "/usr/lib/ignition/base.d" Sep 4 04:25:41.983163 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 04:25:41.983475 ignition[754]: parsed url from cmdline: "" Sep 4 04:25:41.983479 ignition[754]: no config URL provided Sep 4 04:25:41.983489 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 04:25:41.983500 ignition[754]: no config at "/usr/lib/ignition/user.ign" Sep 4 04:25:41.983533 ignition[754]: op(1): [started] loading QEMU firmware config module Sep 4 04:25:41.983541 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 04:25:42.026902 ignition[754]: op(1): [finished] loading QEMU firmware config module Sep 4 04:25:42.065877 systemd-networkd[852]: lo: Link UP Sep 4 04:25:42.065891 systemd-networkd[852]: lo: Gained carrier Sep 4 04:25:42.067959 systemd-networkd[852]: Enumeration completed Sep 4 04:25:42.068103 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 04:25:42.068484 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 04:25:42.068488 systemd-networkd[852]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 04:25:42.069008 systemd-networkd[852]: eth0: Link UP Sep 4 04:25:42.069905 systemd-networkd[852]: eth0: Gained carrier Sep 4 04:25:42.069917 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 04:25:42.070392 systemd[1]: Reached target network.target - Network. 
Sep 4 04:25:42.083031 ignition[754]: parsing config with SHA512: ab90138cb24d895c95ee9acd5c5d0a250a93cf6561a362bddd05876e57bcff9b38924946fa89de666b46088b8776a7e4c7e009d93a2372ddc65ecb415bd58edd Sep 4 04:25:42.087060 unknown[754]: fetched base config from "system" Sep 4 04:25:42.087079 unknown[754]: fetched user config from "qemu" Sep 4 04:25:42.088362 systemd-networkd[852]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 04:25:42.090338 ignition[754]: fetch-offline: fetch-offline passed Sep 4 04:25:42.091300 ignition[754]: Ignition finished successfully Sep 4 04:25:42.095796 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 04:25:42.096156 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 04:25:42.097270 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 04:25:42.138527 ignition[859]: Ignition 2.22.0 Sep 4 04:25:42.138543 ignition[859]: Stage: kargs Sep 4 04:25:42.138750 ignition[859]: no configs at "/usr/lib/ignition/base.d" Sep 4 04:25:42.138766 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 04:25:42.139902 ignition[859]: kargs: kargs passed Sep 4 04:25:42.139961 ignition[859]: Ignition finished successfully Sep 4 04:25:42.149308 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 04:25:42.152040 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 04:25:42.211683 ignition[867]: Ignition 2.22.0 Sep 4 04:25:42.211696 ignition[867]: Stage: disks Sep 4 04:25:42.211854 ignition[867]: no configs at "/usr/lib/ignition/base.d" Sep 4 04:25:42.211865 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 04:25:42.212780 ignition[867]: disks: disks passed Sep 4 04:25:42.216410 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Sep 4 04:25:42.212845 ignition[867]: Ignition finished successfully Sep 4 04:25:42.218601 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 04:25:42.220965 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 04:25:42.221069 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 04:25:42.221688 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 04:25:42.222063 systemd[1]: Reached target basic.target - Basic System. Sep 4 04:25:42.224160 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 04:25:42.269794 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 4 04:25:42.281094 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 04:25:42.286835 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 04:25:42.454352 kernel: EXT4-fs (vda9): mounted filesystem d147a273-ffc0-4c78-a5f1-46a3b3f6b4ff r/w with ordered data mode. Quota mode: none. Sep 4 04:25:42.455664 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 04:25:42.456494 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 04:25:42.461319 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 04:25:42.464035 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 04:25:42.466523 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 04:25:42.466599 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 04:25:42.468788 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 04:25:42.483748 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Sep 4 04:25:42.488250 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 04:25:42.491977 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Sep 4 04:25:42.494354 kernel: BTRFS info (device vda6): first mount of filesystem 1535a26e-7205-4f17-83f6-e5f828340771 Sep 4 04:25:42.494411 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 04:25:42.498975 kernel: BTRFS info (device vda6): turning on async discard Sep 4 04:25:42.499008 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 04:25:42.502303 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 04:25:42.536798 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 04:25:42.541797 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Sep 4 04:25:42.546849 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 04:25:42.552102 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 04:25:42.674055 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 04:25:42.676705 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 04:25:42.679562 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 04:25:42.701602 kernel: BTRFS info (device vda6): last unmount of filesystem 1535a26e-7205-4f17-83f6-e5f828340771 Sep 4 04:25:42.711546 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 04:25:42.717802 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 4 04:25:42.747925 ignition[1000]: INFO : Ignition 2.22.0 Sep 4 04:25:42.747925 ignition[1000]: INFO : Stage: mount Sep 4 04:25:42.749898 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 04:25:42.749898 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 04:25:42.752878 ignition[1000]: INFO : mount: mount passed Sep 4 04:25:42.753811 ignition[1000]: INFO : Ignition finished successfully Sep 4 04:25:42.757890 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 04:25:42.760618 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 04:25:42.786552 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 04:25:42.808329 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012) Sep 4 04:25:42.810620 kernel: BTRFS info (device vda6): first mount of filesystem 1535a26e-7205-4f17-83f6-e5f828340771 Sep 4 04:25:42.810652 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 04:25:42.814314 kernel: BTRFS info (device vda6): turning on async discard Sep 4 04:25:42.814349 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 04:25:42.816104 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 04:25:42.855945 ignition[1029]: INFO : Ignition 2.22.0
Sep 4 04:25:42.855945 ignition[1029]: INFO : Stage: files
Sep 4 04:25:42.911325 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 04:25:42.911325 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 04:25:42.911325 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 04:25:42.911325 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 04:25:42.911325 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 04:25:42.919687 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 04:25:42.919687 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 04:25:42.919687 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 04:25:42.919687 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 04:25:42.919687 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 4 04:25:42.914587 unknown[1029]: wrote ssh authorized keys file for user: core
Sep 4 04:25:43.072954 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 04:25:43.359840 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 04:25:43.359840 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 04:25:43.364114 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 04:25:43.364114 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 04:25:43.368331 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 04:25:43.368331 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 04:25:43.368331 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 04:25:43.375385 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 04:25:43.375385 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 04:25:43.383583 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 04:25:43.387689 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 04:25:43.387689 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 4 04:25:43.393446 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 4 04:25:43.393446 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 4 04:25:43.393446 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 4 04:25:43.526822 systemd-networkd[852]: eth0: Gained IPv6LL
Sep 4 04:25:43.936396 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 04:25:44.608666 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 4 04:25:44.608666 ignition[1029]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 04:25:44.612816 ignition[1029]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 04:25:44.853869 ignition[1029]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 04:25:44.853869 ignition[1029]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 04:25:44.853869 ignition[1029]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 4 04:25:44.859770 ignition[1029]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 04:25:44.859770 ignition[1029]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 04:25:44.859770 ignition[1029]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 4 04:25:44.859770 ignition[1029]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 04:25:44.879816 ignition[1029]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 04:25:44.885812 ignition[1029]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 04:25:44.887550 ignition[1029]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 04:25:44.887550 ignition[1029]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 04:25:44.887550 ignition[1029]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 04:25:44.887550 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 04:25:44.887550 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 04:25:44.887550 ignition[1029]: INFO : files: files passed
Sep 4 04:25:44.887550 ignition[1029]: INFO : Ignition finished successfully
Sep 4 04:25:44.892110 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 04:25:44.894888 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 04:25:44.900115 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 04:25:44.912297 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 04:25:44.912480 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 04:25:44.917690 initrd-setup-root-after-ignition[1057]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 04:25:44.922973 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 04:25:44.924769 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 04:25:44.926373 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 04:25:44.925819 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 04:25:44.927936 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 04:25:44.932080 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 04:25:45.024633 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 04:25:45.024815 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 04:25:45.026314 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 04:25:45.030437 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 04:25:45.033077 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 04:25:45.034276 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 04:25:45.070531 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 04:25:45.072615 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 04:25:45.100372 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 04:25:45.103040 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 04:25:45.104701 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 04:25:45.106244 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 04:25:45.106455 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 04:25:45.110992 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 04:25:45.111168 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 04:25:45.113043 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 04:25:45.113399 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 04:25:45.117706 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 04:25:45.119937 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 04:25:45.122301 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 04:25:45.123391 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 04:25:45.123860 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 04:25:45.124220 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 04:25:45.124712 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 04:25:45.124997 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 04:25:45.125150 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 04:25:45.135579 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 04:25:45.137800 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 04:25:45.140209 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 04:25:45.141171 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 04:25:45.143799 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 04:25:45.144896 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 04:25:45.147202 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 04:25:45.148328 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 04:25:45.150789 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 04:25:45.150925 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 04:25:45.154427 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 04:25:45.157388 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 04:25:45.158514 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 04:25:45.160402 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 04:25:45.160518 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 04:25:45.162275 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 04:25:45.162401 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 04:25:45.163177 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 04:25:45.163323 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 04:25:45.165031 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 04:25:45.165174 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 04:25:45.170171 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 04:25:45.172365 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 04:25:45.172538 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 04:25:45.191892 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 04:25:45.193673 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 04:25:45.193819 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 04:25:45.196029 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 04:25:45.196196 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 04:25:45.203136 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 04:25:45.205425 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 04:25:45.229341 ignition[1084]: INFO : Ignition 2.22.0
Sep 4 04:25:45.229341 ignition[1084]: INFO : Stage: umount
Sep 4 04:25:45.229341 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 04:25:45.229341 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 04:25:45.234578 ignition[1084]: INFO : umount: umount passed
Sep 4 04:25:45.234578 ignition[1084]: INFO : Ignition finished successfully
Sep 4 04:25:45.234513 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 04:25:45.234689 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 04:25:45.237728 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 04:25:45.238674 systemd[1]: Stopped target network.target - Network.
Sep 4 04:25:45.239880 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 04:25:45.239954 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 04:25:45.242182 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 04:25:45.242240 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 04:25:45.244384 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 04:25:45.244446 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 04:25:45.247170 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 04:25:45.247223 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 04:25:45.248723 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 04:25:45.250822 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 04:25:45.255340 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 04:25:45.255541 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 04:25:45.261107 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 04:25:45.261525 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 04:25:45.261673 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 04:25:45.265109 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 04:25:45.265829 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 4 04:25:45.266346 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 04:25:45.266421 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 04:25:45.267905 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 04:25:45.271980 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 04:25:45.272057 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 04:25:45.272588 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 04:25:45.272644 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 04:25:45.277638 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 04:25:45.277701 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 04:25:45.278474 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 04:25:45.278637 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 04:25:45.282407 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 04:25:45.283977 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 04:25:45.284062 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 04:25:45.306766 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 04:25:45.307042 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 04:25:45.312541 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 04:25:45.312809 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 04:25:45.315667 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 04:25:45.315732 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 04:25:45.316730 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 04:25:45.316791 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 04:25:45.320084 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 04:25:45.320162 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 04:25:45.323332 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 04:25:45.323393 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 04:25:45.326625 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 04:25:45.326695 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 04:25:45.334256 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 04:25:45.337882 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 4 04:25:45.338032 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 04:25:45.343377 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 04:25:45.343463 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 04:25:45.345872 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 04:25:45.345952 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 04:25:45.351752 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 4 04:25:45.351827 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 04:25:45.351903 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 04:25:45.365104 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 04:25:45.365303 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 04:25:45.384070 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 04:25:45.384298 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 04:25:45.385805 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 04:25:45.387273 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 04:25:45.387365 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 04:25:45.391929 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 04:25:45.422684 systemd[1]: Switching root.
Sep 4 04:25:45.465366 systemd-journald[220]: Journal stopped
Sep 4 04:25:47.204952 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 4 04:25:47.205029 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 04:25:47.205046 kernel: SELinux: policy capability open_perms=1
Sep 4 04:25:47.205061 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 04:25:47.205076 kernel: SELinux: policy capability always_check_network=0
Sep 4 04:25:47.205091 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 04:25:47.205121 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 04:25:47.205137 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 04:25:47.205152 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 04:25:47.205166 kernel: SELinux: policy capability userspace_initial_context=0
Sep 4 04:25:47.205181 kernel: audit: type=1403 audit(1756959946.126:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 04:25:47.205203 systemd[1]: Successfully loaded SELinux policy in 281.766ms.
Sep 4 04:25:47.205248 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.535ms.
Sep 4 04:25:47.205265 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 04:25:47.206508 systemd[1]: Detected virtualization kvm.
Sep 4 04:25:47.206539 systemd[1]: Detected architecture x86-64.
Sep 4 04:25:47.206555 systemd[1]: Detected first boot.
Sep 4 04:25:47.206577 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 04:25:47.206593 zram_generator::config[1130]: No configuration found.
Sep 4 04:25:47.206610 kernel: Guest personality initialized and is inactive
Sep 4 04:25:47.206626 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 4 04:25:47.206640 kernel: Initialized host personality
Sep 4 04:25:47.206655 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 04:25:47.206677 systemd[1]: Populated /etc with preset unit settings.
Sep 4 04:25:47.206694 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 04:25:47.206710 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 04:25:47.206726 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 04:25:47.206741 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 04:25:47.206757 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 04:25:47.206773 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 04:25:47.206789 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 04:25:47.206804 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 04:25:47.206826 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 04:25:47.206843 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 04:25:47.207001 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 04:25:47.207017 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 04:25:47.207033 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 04:25:47.207055 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 04:25:47.207071 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 04:25:47.207088 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 04:25:47.207114 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 04:25:47.207137 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 04:25:47.207155 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 04:25:47.207171 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 04:25:47.207187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 04:25:47.207202 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 04:25:47.207218 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 04:25:47.207240 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 04:25:47.207261 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 04:25:47.209329 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 04:25:47.209363 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 04:25:47.209380 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 04:25:47.209396 systemd[1]: Reached target swap.target - Swaps.
Sep 4 04:25:47.209411 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 04:25:47.209427 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 04:25:47.209443 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 04:25:47.209458 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 04:25:47.209473 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 04:25:47.209499 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 04:25:47.209514 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 04:25:47.209530 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 04:25:47.209546 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 04:25:47.209562 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 04:25:47.209578 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:25:47.209594 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 04:25:47.209610 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 04:25:47.209632 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 04:25:47.209649 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 04:25:47.209665 systemd[1]: Reached target machines.target - Containers.
Sep 4 04:25:47.209680 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 04:25:47.209696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 04:25:47.209713 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 04:25:47.209729 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 04:25:47.209798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 04:25:47.209814 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 04:25:47.209836 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 04:25:47.209852 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 04:25:47.209868 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 04:25:47.209884 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 04:25:47.209900 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 04:25:47.209916 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 04:25:47.209931 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 04:25:47.209947 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 04:25:47.209970 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 04:25:47.209986 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 04:25:47.210001 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 04:25:47.210018 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 04:25:47.210034 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 04:25:47.210049 kernel: loop: module loaded
Sep 4 04:25:47.210063 kernel: fuse: init (API version 7.41)
Sep 4 04:25:47.210079 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 04:25:47.210138 systemd-journald[1195]: Collecting audit messages is disabled.
Sep 4 04:25:47.210176 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 04:25:47.210193 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 04:25:47.210218 systemd[1]: Stopped verity-setup.service.
Sep 4 04:25:47.210234 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:25:47.210256 systemd-journald[1195]: Journal started
Sep 4 04:25:47.210311 systemd-journald[1195]: Runtime Journal (/run/log/journal/21381edcb99a4f9f88b284d893dc2b61) is 6M, max 48.5M, 42.4M free.
Sep 4 04:25:46.909844 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 04:25:46.931245 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 04:25:46.931929 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 04:25:47.213174 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 04:25:47.213808 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 04:25:47.215060 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 04:25:47.216310 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 04:25:47.220462 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 04:25:47.222037 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 04:25:47.223631 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 04:25:47.225186 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 04:25:47.227344 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 04:25:47.227696 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 04:25:47.229392 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 04:25:47.229712 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 04:25:47.231540 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 04:25:47.231854 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 04:25:47.233565 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 04:25:47.233857 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 04:25:47.235509 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 04:25:47.235895 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 04:25:47.237935 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 04:25:47.240326 kernel: ACPI: bus type drm_connector registered
Sep 4 04:25:47.241925 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 04:25:47.244867 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 04:25:47.245193 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 04:25:47.247085 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 04:25:47.251871 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 04:25:47.270497 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 04:25:47.273927 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 04:25:47.276893 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 04:25:47.278295 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 04:25:47.278340 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 04:25:47.280676 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 04:25:47.287934 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 04:25:47.289082 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 04:25:47.337753 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 04:25:47.340433 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 04:25:47.341742 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 04:25:47.343951 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 04:25:47.345568 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 04:25:47.352417 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 04:25:47.356492 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 04:25:47.361932 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 04:25:47.362614 systemd-journald[1195]: Time spent on flushing to /var/log/journal/21381edcb99a4f9f88b284d893dc2b61 is 35.509ms for 1067 entries. Sep 4 04:25:47.362614 systemd-journald[1195]: System Journal (/var/log/journal/21381edcb99a4f9f88b284d893dc2b61) is 8M, max 195.6M, 187.6M free. Sep 4 04:25:47.414961 systemd-journald[1195]: Received client request to flush runtime journal. Sep 4 04:25:47.415005 kernel: loop0: detected capacity change from 0 to 221472 Sep 4 04:25:47.365605 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 04:25:47.371091 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 04:25:47.377799 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 04:25:47.381984 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 04:25:47.386883 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Sep 4 04:25:47.403646 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 04:25:47.417328 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 04:25:47.419474 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 04:25:47.424886 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 04:25:47.428070 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 04:25:47.452325 kernel: loop1: detected capacity change from 0 to 128016 Sep 4 04:25:47.453934 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 4 04:25:47.472955 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 04:25:47.478312 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 04:25:47.490586 kernel: loop2: detected capacity change from 0 to 110984 Sep 4 04:25:47.516262 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Sep 4 04:25:47.516299 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Sep 4 04:25:47.526386 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 04:25:47.554345 kernel: loop3: detected capacity change from 0 to 221472 Sep 4 04:25:47.568315 kernel: loop4: detected capacity change from 0 to 128016 Sep 4 04:25:47.580349 kernel: loop5: detected capacity change from 0 to 110984 Sep 4 04:25:47.591551 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 04:25:47.592589 (sd-merge)[1274]: Merged extensions into '/usr'. Sep 4 04:25:47.598897 systemd[1]: Reload requested from client PID 1242 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 04:25:47.598917 systemd[1]: Reloading... Sep 4 04:25:47.699319 zram_generator::config[1303]: No configuration found. 
Sep 4 04:25:47.860027 ldconfig[1237]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 04:25:47.923679 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 04:25:47.923932 systemd[1]: Reloading finished in 324 ms. Sep 4 04:25:47.956500 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 04:25:47.969841 systemd[1]: Starting ensure-sysext.service... Sep 4 04:25:48.044214 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 04:25:48.062595 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 04:25:48.121417 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)... Sep 4 04:25:48.121434 systemd[1]: Reloading... Sep 4 04:25:48.128574 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 4 04:25:48.128959 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 4 04:25:48.129317 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 04:25:48.129577 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 04:25:48.130784 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 04:25:48.131166 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Sep 4 04:25:48.131334 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Sep 4 04:25:48.135870 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 04:25:48.135978 systemd-tmpfiles[1337]: Skipping /boot Sep 4 04:25:48.146720 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 4 04:25:48.146739 systemd-tmpfiles[1337]: Skipping /boot Sep 4 04:25:48.216382 zram_generator::config[1368]: No configuration found. Sep 4 04:25:48.400955 systemd[1]: Reloading finished in 279 ms. Sep 4 04:25:48.440227 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 04:25:48.457258 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 04:25:48.478525 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 04:25:48.483525 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 04:25:48.495020 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 04:25:48.498599 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 04:25:48.502151 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 04:25:48.502575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 04:25:48.504223 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 04:25:48.510655 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 04:25:48.514042 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 04:25:48.515347 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 04:25:48.515538 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 4 04:25:48.515707 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 04:25:48.517505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 04:25:48.517738 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 04:25:48.519523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 04:25:48.519745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 04:25:48.523201 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 04:25:48.523447 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 04:25:48.529523 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 04:25:48.529706 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 04:25:48.531046 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 04:25:48.583736 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 04:25:48.586576 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 04:25:48.587805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 04:25:48.588008 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 04:25:48.590556 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Sep 4 04:25:48.611161 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 04:25:48.612777 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 04:25:48.614464 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 04:25:48.619621 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 04:25:48.619954 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 04:25:48.621850 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 04:25:48.622118 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 04:25:48.629002 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 04:25:48.629271 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 04:25:48.636808 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 04:25:48.637141 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 04:25:48.639221 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 04:25:48.641846 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 04:25:48.646628 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 04:25:48.657618 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 04:25:48.658843 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 4 04:25:48.659019 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 04:25:48.659231 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 04:25:48.662042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 04:25:48.662352 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 04:25:48.664952 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 04:25:48.665185 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 04:25:48.667277 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 04:25:48.669863 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 04:25:48.670831 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 04:25:48.681109 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 04:25:48.683225 systemd[1]: Finished ensure-sysext.service. Sep 4 04:25:48.684617 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 04:25:48.688160 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 04:25:48.688480 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 04:25:48.695257 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 04:25:48.695367 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 04:25:48.697553 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Sep 4 04:25:48.732004 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 04:25:48.733632 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 04:25:48.786753 systemd-resolved[1406]: Positive Trust Anchors: Sep 4 04:25:48.786775 systemd-resolved[1406]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 04:25:48.787160 augenrules[1458]: No rules Sep 4 04:25:48.786818 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 04:25:48.789089 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 04:25:48.789403 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 04:25:48.790961 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 04:25:48.791352 systemd-resolved[1406]: Defaulting to hostname 'linux'. Sep 4 04:25:48.792727 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 04:25:48.793899 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 04:25:48.795082 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 04:25:48.832755 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Sep 4 04:25:48.836248 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 04:25:48.839138 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 04:25:48.879869 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 04:25:48.897135 systemd-udevd[1466]: Using default interface naming scheme 'v255'. Sep 4 04:25:48.918866 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 04:25:48.931233 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 04:25:48.932581 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 04:25:48.933904 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 04:25:48.935141 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 4 04:25:48.936479 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 04:25:48.937695 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 04:25:48.938994 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 04:25:48.940506 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 04:25:48.940545 systemd[1]: Reached target paths.target - Path Units. Sep 4 04:25:48.941741 systemd[1]: Reached target timers.target - Timer Units. Sep 4 04:25:48.943912 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 04:25:48.946978 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 04:25:48.950562 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Sep 4 04:25:48.952699 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 04:25:48.954156 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 04:25:48.958730 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 04:25:48.960841 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 04:25:48.967590 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 04:25:48.969627 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 04:25:48.973626 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 04:25:48.975020 systemd[1]: Reached target basic.target - Basic System. Sep 4 04:25:48.976365 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 04:25:48.976396 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 04:25:48.977791 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 04:25:48.982512 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 04:25:48.987619 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 04:25:48.990487 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 04:25:48.992600 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 04:25:49.000591 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 4 04:25:49.004538 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 04:25:49.007080 jq[1498]: false Sep 4 04:25:49.007641 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 4 04:25:49.012557 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 04:25:49.015591 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 04:25:49.018586 google_oslogin_nss_cache[1500]: oslogin_cache_refresh[1500]: Refreshing passwd entry cache Sep 4 04:25:49.018831 oslogin_cache_refresh[1500]: Refreshing passwd entry cache Sep 4 04:25:49.022084 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 04:25:49.024225 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 04:25:49.024343 oslogin_cache_refresh[1500]: Failure getting users, quitting Sep 4 04:25:49.025099 google_oslogin_nss_cache[1500]: oslogin_cache_refresh[1500]: Failure getting users, quitting Sep 4 04:25:49.025099 google_oslogin_nss_cache[1500]: oslogin_cache_refresh[1500]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 04:25:49.025099 google_oslogin_nss_cache[1500]: oslogin_cache_refresh[1500]: Refreshing group entry cache Sep 4 04:25:49.025099 google_oslogin_nss_cache[1500]: oslogin_cache_refresh[1500]: Failure getting groups, quitting Sep 4 04:25:49.025099 google_oslogin_nss_cache[1500]: oslogin_cache_refresh[1500]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 04:25:49.024853 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 04:25:49.024360 oslogin_cache_refresh[1500]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 04:25:49.025848 systemd[1]: Starting update-engine.service - Update Engine... 
Sep 4 04:25:49.024409 oslogin_cache_refresh[1500]: Refreshing group entry cache Sep 4 04:25:49.024886 oslogin_cache_refresh[1500]: Failure getting groups, quitting Sep 4 04:25:49.024896 oslogin_cache_refresh[1500]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 04:25:49.030470 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 04:25:49.033175 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 04:25:49.034923 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 04:25:49.035398 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 04:25:49.035848 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 4 04:25:49.036259 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 4 04:25:49.039959 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 04:25:49.040427 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 04:25:49.043720 extend-filesystems[1499]: Found /dev/vda6 Sep 4 04:25:49.055461 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 04:25:49.056005 jq[1512]: true Sep 4 04:25:49.055840 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 04:25:49.069895 extend-filesystems[1499]: Found /dev/vda9 Sep 4 04:25:49.074446 jq[1527]: true Sep 4 04:25:49.078614 tar[1516]: linux-amd64/helm Sep 4 04:25:49.084277 update_engine[1511]: I20250904 04:25:49.083966 1511 main.cc:92] Flatcar Update Engine starting Sep 4 04:25:49.084638 extend-filesystems[1499]: Checking size of /dev/vda9 Sep 4 04:25:49.110902 dbus-daemon[1496]: [system] SELinux support is enabled Sep 4 04:25:49.111147 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 4 04:25:49.114959 update_engine[1511]: I20250904 04:25:49.114900 1511 update_check_scheduler.cc:74] Next update check in 9m43s Sep 4 04:25:49.120521 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 04:25:49.120563 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 04:25:49.121847 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 04:25:49.121878 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 04:25:49.126315 extend-filesystems[1499]: Resized partition /dev/vda9 Sep 4 04:25:49.127072 systemd[1]: Started update-engine.service - Update Engine. Sep 4 04:25:49.131722 extend-filesystems[1556]: resize2fs 1.47.3 (8-Jul-2025) Sep 4 04:25:49.136445 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 04:25:49.188887 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 04:25:49.231194 systemd-logind[1509]: New seat seat0. Sep 4 04:25:49.232399 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 04:25:49.297160 sshd_keygen[1530]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 04:25:49.309896 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 04:25:49.318307 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 04:25:49.315656 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 04:25:49.332755 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Sep 4 04:25:49.354711 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 04:25:49.401877 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 04:25:49.402341 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 04:25:49.407335 systemd-networkd[1492]: lo: Link UP Sep 4 04:25:49.407347 systemd-networkd[1492]: lo: Gained carrier Sep 4 04:25:49.408308 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 4 04:25:49.413310 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 04:25:49.441124 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 04:25:49.441175 kernel: ACPI: button: Power Button [PWRF] Sep 4 04:25:49.441190 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 4 04:25:49.441537 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 4 04:25:49.441717 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 4 04:25:49.418242 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 04:25:49.420232 systemd-networkd[1492]: Enumeration completed Sep 4 04:25:49.420432 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 04:25:49.420768 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 04:25:49.420773 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 04:25:49.423969 systemd-networkd[1492]: eth0: Link UP Sep 4 04:25:49.424159 systemd[1]: Reached target network.target - Network. Sep 4 04:25:49.424327 systemd-networkd[1492]: eth0: Gained carrier Sep 4 04:25:49.424345 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 04:25:49.428945 systemd[1]: Starting containerd.service - containerd container runtime... 
Sep 4 04:25:49.439349 systemd-networkd[1492]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 04:25:49.440871 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 4 04:25:49.442914 extend-filesystems[1556]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 04:25:49.442914 extend-filesystems[1556]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 04:25:49.442914 extend-filesystems[1556]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 04:25:49.448424 extend-filesystems[1499]: Resized filesystem in /dev/vda9 Sep 4 04:25:49.449491 bash[1554]: Updated "/home/core/.ssh/authorized_keys" Sep 4 04:25:49.446732 systemd-timesyncd[1453]: Network configuration changed, trying to establish connection. Sep 4 04:25:49.447682 systemd-timesyncd[1453]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 04:25:49.447730 systemd-timesyncd[1453]: Initial clock synchronization to Thu 2025-09-04 04:25:49.311081 UTC. Sep 4 04:25:49.450040 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 04:25:49.455006 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 04:25:49.458077 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 04:25:49.462337 locksmithd[1557]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 04:25:49.464501 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 04:25:49.466646 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 04:25:49.473350 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 04:25:49.495232 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Sep 4 04:25:49.508314 (ntainerd)[1612]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 04:25:49.510879 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 04:25:49.515590 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 04:25:49.517013 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 04:25:49.519661 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 04:25:49.532431 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 04:25:49.543537 tar[1516]: linux-amd64/LICENSE Sep 4 04:25:49.543694 tar[1516]: linux-amd64/README.md Sep 4 04:25:49.599943 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 04:25:49.602490 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 04:25:49.603054 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 04:25:49.606418 systemd-logind[1509]: Watching system buttons on /dev/input/event2 (Power Button) Sep 4 04:25:49.607759 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 04:25:49.615183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 04:25:49.616755 kernel: kvm_amd: TSC scaling supported Sep 4 04:25:49.616856 kernel: kvm_amd: Nested Virtualization enabled Sep 4 04:25:49.616889 kernel: kvm_amd: Nested Paging enabled Sep 4 04:25:49.616912 kernel: kvm_amd: LBR virtualization supported Sep 4 04:25:49.616939 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 04:25:49.616966 kernel: kvm_amd: Virtual GIF supported Sep 4 04:25:49.625106 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Sep 4 04:25:49.652319 kernel: EDAC MC: Ver: 3.0.0 Sep 4 04:25:49.689323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 04:25:49.723817 containerd[1612]: time="2025-09-04T04:25:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 4 04:25:49.724512 containerd[1612]: time="2025-09-04T04:25:49.724475218Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 4 04:25:49.735969 containerd[1612]: time="2025-09-04T04:25:49.735898566Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.538µs" Sep 4 04:25:49.735969 containerd[1612]: time="2025-09-04T04:25:49.735944222Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 4 04:25:49.735969 containerd[1612]: time="2025-09-04T04:25:49.735962376Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 4 04:25:49.736226 containerd[1612]: time="2025-09-04T04:25:49.736197737Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 4 04:25:49.736226 containerd[1612]: time="2025-09-04T04:25:49.736217444Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 4 04:25:49.736304 containerd[1612]: time="2025-09-04T04:25:49.736244846Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 04:25:49.736365 containerd[1612]: time="2025-09-04T04:25:49.736336347Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 04:25:49.736365 containerd[1612]: 
time="2025-09-04T04:25:49.736356745Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 04:25:49.736726 containerd[1612]: time="2025-09-04T04:25:49.736672097Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 04:25:49.736726 containerd[1612]: time="2025-09-04T04:25:49.736692645Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 04:25:49.736726 containerd[1612]: time="2025-09-04T04:25:49.736703335Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 04:25:49.736726 containerd[1612]: time="2025-09-04T04:25:49.736711982Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 4 04:25:49.736852 containerd[1612]: time="2025-09-04T04:25:49.736812470Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 4 04:25:49.737093 containerd[1612]: time="2025-09-04T04:25:49.737056187Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 4 04:25:49.737127 containerd[1612]: time="2025-09-04T04:25:49.737092104Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 4 04:25:49.737127 containerd[1612]: time="2025-09-04T04:25:49.737103005Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 4 04:25:49.737167 containerd[1612]: time="2025-09-04T04:25:49.737138381Z" 
level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 4 04:25:49.737453 containerd[1612]: time="2025-09-04T04:25:49.737421632Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 4 04:25:49.737540 containerd[1612]: time="2025-09-04T04:25:49.737514156Z" level=info msg="metadata content store policy set" policy=shared Sep 4 04:25:49.752416 containerd[1612]: time="2025-09-04T04:25:49.752338742Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 4 04:25:49.752416 containerd[1612]: time="2025-09-04T04:25:49.752414604Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 4 04:25:49.752579 containerd[1612]: time="2025-09-04T04:25:49.752433650Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 4 04:25:49.752579 containerd[1612]: time="2025-09-04T04:25:49.752452415Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 4 04:25:49.752579 containerd[1612]: time="2025-09-04T04:25:49.752468876Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 4 04:25:49.752579 containerd[1612]: time="2025-09-04T04:25:49.752480107Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 4 04:25:49.752579 containerd[1612]: time="2025-09-04T04:25:49.752494063Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 4 04:25:49.752579 containerd[1612]: time="2025-09-04T04:25:49.752508099Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 4 04:25:49.752579 containerd[1612]: time="2025-09-04T04:25:49.752522356Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 4 04:25:49.752579 containerd[1612]: time="2025-09-04T04:25:49.752536953Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 4 04:25:49.752579 containerd[1612]: time="2025-09-04T04:25:49.752550809Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 4 04:25:49.752579 containerd[1612]: time="2025-09-04T04:25:49.752576888Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 4 04:25:49.752809 containerd[1612]: time="2025-09-04T04:25:49.752781021Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 4 04:25:49.752841 containerd[1612]: time="2025-09-04T04:25:49.752810516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 4 04:25:49.752841 containerd[1612]: time="2025-09-04T04:25:49.752831566Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 4 04:25:49.752888 containerd[1612]: time="2025-09-04T04:25:49.752854158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 4 04:25:49.752888 containerd[1612]: time="2025-09-04T04:25:49.752869447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 4 04:25:49.752888 containerd[1612]: time="2025-09-04T04:25:49.752881940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 4 04:25:49.752949 containerd[1612]: time="2025-09-04T04:25:49.752899163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 4 04:25:49.752949 containerd[1612]: time="2025-09-04T04:25:49.752913419Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 4 
04:25:49.752949 containerd[1612]: time="2025-09-04T04:25:49.752926955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 4 04:25:49.752949 containerd[1612]: time="2025-09-04T04:25:49.752939779Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 4 04:25:49.753028 containerd[1612]: time="2025-09-04T04:25:49.752952352Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 4 04:25:49.753067 containerd[1612]: time="2025-09-04T04:25:49.753025479Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 4 04:25:49.753067 containerd[1612]: time="2025-09-04T04:25:49.753060515Z" level=info msg="Start snapshots syncer" Sep 4 04:25:49.753124 containerd[1612]: time="2025-09-04T04:25:49.753095962Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 4 04:25:49.753502 containerd[1612]: time="2025-09-04T04:25:49.753437592Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 4 04:25:49.753617 containerd[1612]: time="2025-09-04T04:25:49.753512593Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 4 04:25:49.753641 containerd[1612]: time="2025-09-04T04:25:49.753613983Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 4 04:25:49.753770 containerd[1612]: time="2025-09-04T04:25:49.753742313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 4 04:25:49.753806 containerd[1612]: time="2025-09-04T04:25:49.753776487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 4 04:25:49.753806 containerd[1612]: time="2025-09-04T04:25:49.753793069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 4 04:25:49.753851 containerd[1612]: time="2025-09-04T04:25:49.753813898Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 4 04:25:49.753851 containerd[1612]: time="2025-09-04T04:25:49.753829347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 4 04:25:49.753851 containerd[1612]: time="2025-09-04T04:25:49.753841960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 4 04:25:49.753912 containerd[1612]: time="2025-09-04T04:25:49.753856197Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 4 04:25:49.753912 containerd[1612]: time="2025-09-04T04:25:49.753883228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 4 04:25:49.753912 containerd[1612]: time="2025-09-04T04:25:49.753896673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 4 04:25:49.753912 containerd[1612]: time="2025-09-04T04:25:49.753910018Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 4 04:25:49.753986 containerd[1612]: time="2025-09-04T04:25:49.753949041Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 04:25:49.753986 containerd[1612]: time="2025-09-04T04:25:49.753967556Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 04:25:49.753986 containerd[1612]: time="2025-09-04T04:25:49.753979308Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 04:25:49.754060 containerd[1612]: time="2025-09-04T04:25:49.753990419Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 04:25:49.754060 containerd[1612]: time="2025-09-04T04:25:49.754001219Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 4 04:25:49.754060 containerd[1612]: time="2025-09-04T04:25:49.754012580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 4 04:25:49.754060 containerd[1612]: time="2025-09-04T04:25:49.754024813Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 4 04:25:49.754060 containerd[1612]: time="2025-09-04T04:25:49.754058156Z" level=info msg="runtime interface created" Sep 4 04:25:49.754161 containerd[1612]: time="2025-09-04T04:25:49.754067613Z" level=info msg="created NRI interface" Sep 4 04:25:49.754161 containerd[1612]: time="2025-09-04T04:25:49.754079255Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 4 04:25:49.754161 containerd[1612]: time="2025-09-04T04:25:49.754103861Z" level=info msg="Connect containerd service" Sep 4 04:25:49.754161 containerd[1612]: time="2025-09-04T04:25:49.754138456Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 04:25:49.755175 containerd[1612]: 
time="2025-09-04T04:25:49.755133512Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 04:25:49.844167 containerd[1612]: time="2025-09-04T04:25:49.844100715Z" level=info msg="Start subscribing containerd event" Sep 4 04:25:49.844167 containerd[1612]: time="2025-09-04T04:25:49.844164345Z" level=info msg="Start recovering state" Sep 4 04:25:49.844382 containerd[1612]: time="2025-09-04T04:25:49.844275523Z" level=info msg="Start event monitor" Sep 4 04:25:49.844382 containerd[1612]: time="2025-09-04T04:25:49.844309417Z" level=info msg="Start cni network conf syncer for default" Sep 4 04:25:49.844382 containerd[1612]: time="2025-09-04T04:25:49.844319205Z" level=info msg="Start streaming server" Sep 4 04:25:49.844382 containerd[1612]: time="2025-09-04T04:25:49.844336558Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 4 04:25:49.844382 containerd[1612]: time="2025-09-04T04:25:49.844346136Z" level=info msg="runtime interface starting up..." Sep 4 04:25:49.844382 containerd[1612]: time="2025-09-04T04:25:49.844353660Z" level=info msg="starting plugins..." Sep 4 04:25:49.844382 containerd[1612]: time="2025-09-04T04:25:49.844364009Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 04:25:49.844525 containerd[1612]: time="2025-09-04T04:25:49.844372525Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 4 04:25:49.844525 containerd[1612]: time="2025-09-04T04:25:49.844452535Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 04:25:49.844717 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 4 04:25:49.845143 containerd[1612]: time="2025-09-04T04:25:49.845072297Z" level=info msg="containerd successfully booted in 0.121904s" Sep 4 04:25:50.694511 systemd-networkd[1492]: eth0: Gained IPv6LL Sep 4 04:25:50.699308 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 04:25:50.702416 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 04:25:50.706456 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 04:25:50.710468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 04:25:50.714318 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 04:25:50.756525 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 04:25:50.759012 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 04:25:50.759407 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 04:25:50.762043 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 04:25:52.002557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 04:25:52.004778 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 04:25:52.006401 systemd[1]: Startup finished in 3.599s (kernel) + 7.243s (initrd) + 6.160s (userspace) = 17.004s. Sep 4 04:25:52.013868 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 04:25:52.439728 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 04:25:52.441021 systemd[1]: Started sshd@0-10.0.0.112:22-10.0.0.1:46094.service - OpenSSH per-connection server daemon (10.0.0.1:46094). 
Sep 4 04:25:52.544682 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 46094 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc Sep 4 04:25:52.546610 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:25:52.554848 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 04:25:52.556164 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 04:25:52.563639 systemd-logind[1509]: New session 1 of user core. Sep 4 04:25:52.597109 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 04:25:52.601017 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 04:25:52.613597 (systemd)[1693]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 04:25:52.618434 systemd-logind[1509]: New session c1 of user core. Sep 4 04:25:52.896054 systemd[1693]: Queued start job for default target default.target. Sep 4 04:25:52.906182 systemd[1693]: Created slice app.slice - User Application Slice. Sep 4 04:25:52.906222 systemd[1693]: Reached target paths.target - Paths. Sep 4 04:25:52.906342 systemd[1693]: Reached target timers.target - Timers. Sep 4 04:25:52.908566 systemd[1693]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 04:25:52.923515 systemd[1693]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 04:25:52.923663 systemd[1693]: Reached target sockets.target - Sockets. Sep 4 04:25:52.923717 systemd[1693]: Reached target basic.target - Basic System. Sep 4 04:25:52.923765 systemd[1693]: Reached target default.target - Main User Target. Sep 4 04:25:52.923803 systemd[1693]: Startup finished in 198ms. Sep 4 04:25:52.925088 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 04:25:52.936651 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 4 04:25:53.006585 systemd[1]: Started sshd@1-10.0.0.112:22-10.0.0.1:46096.service - OpenSSH per-connection server daemon (10.0.0.1:46096). Sep 4 04:25:53.085201 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 46096 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc Sep 4 04:25:53.087548 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:25:53.094639 systemd-logind[1509]: New session 2 of user core. Sep 4 04:25:53.179178 kubelet[1676]: E0904 04:25:53.178910 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 04:25:53.184631 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 04:25:53.185061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 04:25:53.185252 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 04:25:53.185631 systemd[1]: kubelet.service: Consumed 2.222s CPU time, 265.5M memory peak. Sep 4 04:25:53.240126 sshd[1708]: Connection closed by 10.0.0.1 port 46096 Sep 4 04:25:53.240489 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Sep 4 04:25:53.251569 systemd[1]: sshd@1-10.0.0.112:22-10.0.0.1:46096.service: Deactivated successfully. Sep 4 04:25:53.253663 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 04:25:53.254481 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit. Sep 4 04:25:53.257646 systemd[1]: Started sshd@2-10.0.0.112:22-10.0.0.1:46110.service - OpenSSH per-connection server daemon (10.0.0.1:46110). Sep 4 04:25:53.258228 systemd-logind[1509]: Removed session 2. 
Sep 4 04:25:53.318260 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 46110 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc Sep 4 04:25:53.319986 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:25:53.325295 systemd-logind[1509]: New session 3 of user core. Sep 4 04:25:53.335435 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 04:25:53.387697 sshd[1717]: Connection closed by 10.0.0.1 port 46110 Sep 4 04:25:53.388163 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Sep 4 04:25:53.401455 systemd[1]: sshd@2-10.0.0.112:22-10.0.0.1:46110.service: Deactivated successfully. Sep 4 04:25:53.403800 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 04:25:53.404642 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit. Sep 4 04:25:53.407904 systemd[1]: Started sshd@3-10.0.0.112:22-10.0.0.1:46116.service - OpenSSH per-connection server daemon (10.0.0.1:46116). Sep 4 04:25:53.408899 systemd-logind[1509]: Removed session 3. Sep 4 04:25:53.478083 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 46116 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc Sep 4 04:25:53.479886 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:25:53.484871 systemd-logind[1509]: New session 4 of user core. Sep 4 04:25:53.499482 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 04:25:53.553846 sshd[1726]: Connection closed by 10.0.0.1 port 46116 Sep 4 04:25:53.554219 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Sep 4 04:25:53.566592 systemd[1]: sshd@3-10.0.0.112:22-10.0.0.1:46116.service: Deactivated successfully. Sep 4 04:25:53.568687 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 04:25:53.569529 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit. 
Sep 4 04:25:53.573136 systemd[1]: Started sshd@4-10.0.0.112:22-10.0.0.1:46122.service - OpenSSH per-connection server daemon (10.0.0.1:46122). Sep 4 04:25:53.573713 systemd-logind[1509]: Removed session 4. Sep 4 04:25:53.627951 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 46122 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc Sep 4 04:25:53.629212 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:25:53.633864 systemd-logind[1509]: New session 5 of user core. Sep 4 04:25:53.643416 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 04:25:53.701498 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 04:25:53.701834 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 04:25:53.720943 sudo[1737]: pam_unix(sudo:session): session closed for user root Sep 4 04:25:53.722808 sshd[1736]: Connection closed by 10.0.0.1 port 46122 Sep 4 04:25:53.723256 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Sep 4 04:25:53.736835 systemd[1]: sshd@4-10.0.0.112:22-10.0.0.1:46122.service: Deactivated successfully. Sep 4 04:25:53.738668 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 04:25:53.739361 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit. Sep 4 04:25:53.742583 systemd[1]: Started sshd@5-10.0.0.112:22-10.0.0.1:46130.service - OpenSSH per-connection server daemon (10.0.0.1:46130). Sep 4 04:25:53.743128 systemd-logind[1509]: Removed session 5. Sep 4 04:25:53.813009 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 46130 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc Sep 4 04:25:53.814712 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:25:53.819865 systemd-logind[1509]: New session 6 of user core. 
Sep 4 04:25:53.829406 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 04:25:53.886147 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 04:25:53.886539 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 04:25:53.895779 sudo[1748]: pam_unix(sudo:session): session closed for user root Sep 4 04:25:53.903032 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 04:25:53.903392 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 04:25:53.913470 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 04:25:53.970771 augenrules[1770]: No rules Sep 4 04:25:53.972975 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 04:25:53.973334 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 04:25:53.974681 sudo[1747]: pam_unix(sudo:session): session closed for user root Sep 4 04:25:53.976653 sshd[1746]: Connection closed by 10.0.0.1 port 46130 Sep 4 04:25:53.977115 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Sep 4 04:25:53.986889 systemd[1]: sshd@5-10.0.0.112:22-10.0.0.1:46130.service: Deactivated successfully. Sep 4 04:25:53.989367 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 04:25:53.990364 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit. Sep 4 04:25:53.993755 systemd[1]: Started sshd@6-10.0.0.112:22-10.0.0.1:46132.service - OpenSSH per-connection server daemon (10.0.0.1:46132). Sep 4 04:25:53.994404 systemd-logind[1509]: Removed session 6. 
Sep 4 04:25:54.065537 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 46132 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc Sep 4 04:25:54.067172 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:25:54.072055 systemd-logind[1509]: New session 7 of user core. Sep 4 04:25:54.082442 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 04:25:54.137531 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 04:25:54.137861 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 04:25:54.840738 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 04:25:54.863722 (dockerd)[1803]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 04:25:55.405021 dockerd[1803]: time="2025-09-04T04:25:55.404926149Z" level=info msg="Starting up" Sep 4 04:25:55.405973 dockerd[1803]: time="2025-09-04T04:25:55.405943504Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 4 04:25:55.428397 dockerd[1803]: time="2025-09-04T04:25:55.428343220Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 4 04:25:57.396375 dockerd[1803]: time="2025-09-04T04:25:57.396261592Z" level=info msg="Loading containers: start." Sep 4 04:25:57.418411 kernel: Initializing XFRM netlink socket Sep 4 04:25:57.881341 systemd-networkd[1492]: docker0: Link UP Sep 4 04:25:57.962192 dockerd[1803]: time="2025-09-04T04:25:57.962089481Z" level=info msg="Loading containers: done." Sep 4 04:25:57.984008 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3343128137-merged.mount: Deactivated successfully. 
Sep 4 04:25:57.992004 dockerd[1803]: time="2025-09-04T04:25:57.991940823Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 04:25:57.992150 dockerd[1803]: time="2025-09-04T04:25:57.992058124Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 4 04:25:57.992201 dockerd[1803]: time="2025-09-04T04:25:57.992189235Z" level=info msg="Initializing buildkit" Sep 4 04:25:58.029816 dockerd[1803]: time="2025-09-04T04:25:58.029768840Z" level=info msg="Completed buildkit initialization" Sep 4 04:25:58.034964 dockerd[1803]: time="2025-09-04T04:25:58.034929998Z" level=info msg="Daemon has completed initialization" Sep 4 04:25:58.035061 dockerd[1803]: time="2025-09-04T04:25:58.034993991Z" level=info msg="API listen on /run/docker.sock" Sep 4 04:25:58.035165 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 04:25:59.024039 containerd[1612]: time="2025-09-04T04:25:59.023980575Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 4 04:26:00.220530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580091920.mount: Deactivated successfully. 
Sep 4 04:26:01.504057 containerd[1612]: time="2025-09-04T04:26:01.503932597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:01.504914 containerd[1612]: time="2025-09-04T04:26:01.504844284Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 4 04:26:01.506526 containerd[1612]: time="2025-09-04T04:26:01.506466824Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:01.510449 containerd[1612]: time="2025-09-04T04:26:01.510378163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:01.512100 containerd[1612]: time="2025-09-04T04:26:01.512011947Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 2.487979418s" Sep 4 04:26:01.512100 containerd[1612]: time="2025-09-04T04:26:01.512060543Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 4 04:26:01.513007 containerd[1612]: time="2025-09-04T04:26:01.512967920Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 4 04:26:03.193023 containerd[1612]: time="2025-09-04T04:26:03.192927977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:03.194048 containerd[1612]: time="2025-09-04T04:26:03.193955569Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 4 04:26:03.195093 containerd[1612]: time="2025-09-04T04:26:03.195050877Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:03.198116 containerd[1612]: time="2025-09-04T04:26:03.198055712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:03.199206 containerd[1612]: time="2025-09-04T04:26:03.199154425Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.686145046s" Sep 4 04:26:03.199301 containerd[1612]: time="2025-09-04T04:26:03.199211241Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 4 04:26:03.200143 containerd[1612]: time="2025-09-04T04:26:03.199917358Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 4 04:26:03.435725 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 04:26:03.437575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 04:26:03.894133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 04:26:03.914916 (kubelet)[2089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 04:26:04.260210 kubelet[2089]: E0904 04:26:04.260000 2089 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 04:26:04.266983 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 04:26:04.267230 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 04:26:04.267832 systemd[1]: kubelet.service: Consumed 277ms CPU time, 111.1M memory peak.
Sep 4 04:26:07.827189 containerd[1612]: time="2025-09-04T04:26:07.827077352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:07.852878 containerd[1612]: time="2025-09-04T04:26:07.852800374Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427"
Sep 4 04:26:07.898394 containerd[1612]: time="2025-09-04T04:26:07.898327222Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:07.974229 containerd[1612]: time="2025-09-04T04:26:07.974126052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:07.975648 containerd[1612]: time="2025-09-04T04:26:07.975608276Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 4.775642527s"
Sep 4 04:26:07.975703 containerd[1612]: time="2025-09-04T04:26:07.975648716Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\""
Sep 4 04:26:07.977096 containerd[1612]: time="2025-09-04T04:26:07.976725586Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 4 04:26:12.012995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount579950510.mount: Deactivated successfully.
Sep 4 04:26:12.833785 containerd[1612]: time="2025-09-04T04:26:12.833664409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:12.834994 containerd[1612]: time="2025-09-04T04:26:12.834944848Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255"
Sep 4 04:26:12.836844 containerd[1612]: time="2025-09-04T04:26:12.836796370Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:12.841307 containerd[1612]: time="2025-09-04T04:26:12.840210289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:12.841307 containerd[1612]: time="2025-09-04T04:26:12.841162750Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 4.864392495s"
Sep 4 04:26:12.841307 containerd[1612]: time="2025-09-04T04:26:12.841196594Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\""
Sep 4 04:26:12.842365 containerd[1612]: time="2025-09-04T04:26:12.842321432Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 4 04:26:14.441451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 04:26:14.443937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 04:26:14.871115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 04:26:14.885728 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 04:26:15.009314 kubelet[2118]: E0904 04:26:15.009219 2118 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 04:26:15.016685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 04:26:15.017345 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 04:26:15.017902 systemd[1]: kubelet.service: Consumed 263ms CPU time, 110.1M memory peak.
Sep 4 04:26:15.029412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3464631704.mount: Deactivated successfully.
Sep 4 04:26:16.564712 containerd[1612]: time="2025-09-04T04:26:16.564625503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:16.566471 containerd[1612]: time="2025-09-04T04:26:16.566413067Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 4 04:26:16.568397 containerd[1612]: time="2025-09-04T04:26:16.568352594Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:16.574146 containerd[1612]: time="2025-09-04T04:26:16.574080732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:16.575454 containerd[1612]: time="2025-09-04T04:26:16.575391521Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.733023632s"
Sep 4 04:26:16.575529 containerd[1612]: time="2025-09-04T04:26:16.575456959Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 4 04:26:16.576234 containerd[1612]: time="2025-09-04T04:26:16.576184395Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 04:26:17.071809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount473867534.mount: Deactivated successfully.
Sep 4 04:26:17.079239 containerd[1612]: time="2025-09-04T04:26:17.079143120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 04:26:17.080083 containerd[1612]: time="2025-09-04T04:26:17.080040963Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 4 04:26:17.081420 containerd[1612]: time="2025-09-04T04:26:17.081380354Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 04:26:17.083877 containerd[1612]: time="2025-09-04T04:26:17.083834792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 04:26:17.084533 containerd[1612]: time="2025-09-04T04:26:17.084482419Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 508.266244ms"
Sep 4 04:26:17.084533 containerd[1612]: time="2025-09-04T04:26:17.084521864Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 4 04:26:17.085114 containerd[1612]: time="2025-09-04T04:26:17.085090581Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 4 04:26:17.691158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4048820822.mount: Deactivated successfully.
Sep 4 04:26:20.216552 containerd[1612]: time="2025-09-04T04:26:20.216462347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:20.217421 containerd[1612]: time="2025-09-04T04:26:20.217354372Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709"
Sep 4 04:26:20.218735 containerd[1612]: time="2025-09-04T04:26:20.218676795Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:20.223328 containerd[1612]: time="2025-09-04T04:26:20.223214543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:20.224654 containerd[1612]: time="2025-09-04T04:26:20.224598355Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.139481291s"
Sep 4 04:26:20.224654 containerd[1612]: time="2025-09-04T04:26:20.224643685Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 4 04:26:22.820960 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 04:26:22.821132 systemd[1]: kubelet.service: Consumed 263ms CPU time, 110.1M memory peak.
Sep 4 04:26:22.823707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 04:26:22.852013 systemd[1]: Reload requested from client PID 2268 ('systemctl') (unit session-7.scope)...
Sep 4 04:26:22.852037 systemd[1]: Reloading...
Sep 4 04:26:23.001373 zram_generator::config[2317]: No configuration found.
Sep 4 04:26:23.700654 systemd[1]: Reloading finished in 848 ms.
Sep 4 04:26:23.773052 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 04:26:23.773159 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 04:26:23.773541 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 04:26:23.773598 systemd[1]: kubelet.service: Consumed 177ms CPU time, 98.3M memory peak.
Sep 4 04:26:23.775326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 04:26:23.955728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 04:26:23.973834 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 04:26:24.031439 kubelet[2359]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 04:26:24.031439 kubelet[2359]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 04:26:24.031439 kubelet[2359]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 04:26:24.031856 kubelet[2359]: I0904 04:26:24.031482 2359 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 04:26:24.239481 kubelet[2359]: I0904 04:26:24.239298 2359 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 4 04:26:24.239481 kubelet[2359]: I0904 04:26:24.239342 2359 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 04:26:24.239697 kubelet[2359]: I0904 04:26:24.239668 2359 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 4 04:26:24.398486 kubelet[2359]: E0904 04:26:24.398386 2359 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError"
Sep 4 04:26:24.404124 kubelet[2359]: I0904 04:26:24.404085 2359 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 04:26:24.510479 kubelet[2359]: I0904 04:26:24.510258 2359 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 4 04:26:24.520013 kubelet[2359]: I0904 04:26:24.519941 2359 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 04:26:24.524788 kubelet[2359]: I0904 04:26:24.524708 2359 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 4 04:26:24.525017 kubelet[2359]: I0904 04:26:24.524961 2359 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 04:26:24.525313 kubelet[2359]: I0904 04:26:24.525002 2359 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 04:26:24.525313 kubelet[2359]: I0904 04:26:24.525294 2359 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 04:26:24.525313 kubelet[2359]: I0904 04:26:24.525308 2359 container_manager_linux.go:300] "Creating device plugin manager"
Sep 4 04:26:24.525591 kubelet[2359]: I0904 04:26:24.525505 2359 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 04:26:24.530702 kubelet[2359]: I0904 04:26:24.530618 2359 kubelet.go:408] "Attempting to sync node with API server"
Sep 4 04:26:24.530702 kubelet[2359]: I0904 04:26:24.530660 2359 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 04:26:24.530702 kubelet[2359]: I0904 04:26:24.530731 2359 kubelet.go:314] "Adding apiserver pod source"
Sep 4 04:26:24.531094 kubelet[2359]: I0904 04:26:24.530886 2359 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 04:26:24.534535 kubelet[2359]: W0904 04:26:24.534462 2359 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Sep 4 04:26:24.534596 kubelet[2359]: E0904 04:26:24.534567 2359 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError"
Sep 4 04:26:24.538425 kubelet[2359]: I0904 04:26:24.538367 2359 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 4 04:26:24.538776 kubelet[2359]: W0904 04:26:24.538707 2359 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Sep 4 04:26:24.538850 kubelet[2359]: E0904 04:26:24.538786 2359 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError"
Sep 4 04:26:24.538999 kubelet[2359]: I0904 04:26:24.538969 2359 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 04:26:24.539976 kubelet[2359]: W0904 04:26:24.539939 2359 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 04:26:24.543564 kubelet[2359]: I0904 04:26:24.543501 2359 server.go:1274] "Started kubelet"
Sep 4 04:26:24.549484 kubelet[2359]: I0904 04:26:24.549383 2359 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 04:26:24.552224 kubelet[2359]: I0904 04:26:24.550543 2359 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 04:26:24.552224 kubelet[2359]: I0904 04:26:24.551397 2359 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 04:26:24.552224 kubelet[2359]: I0904 04:26:24.551512 2359 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 04:26:24.552820 kubelet[2359]: I0904 04:26:24.552788 2359 server.go:449] "Adding debug handlers to kubelet server"
Sep 4 04:26:24.554341 kubelet[2359]: I0904 04:26:24.554273 2359 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 04:26:24.554819 kubelet[2359]: E0904 04:26:24.552918 2359 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1861f9cc13496400 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 04:26:24.543425536 +0000 UTC m=+0.563194315,LastTimestamp:2025-09-04 04:26:24.543425536 +0000 UTC m=+0.563194315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 4 04:26:24.555956 kubelet[2359]: I0904 04:26:24.555843 2359 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 4 04:26:24.556441 kubelet[2359]: E0904 04:26:24.556418 2359 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 04:26:24.557447 kubelet[2359]: I0904 04:26:24.557074 2359 factory.go:221] Registration of the systemd container factory successfully
Sep 4 04:26:24.557569 kubelet[2359]: I0904 04:26:24.557496 2359 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 04:26:24.557569 kubelet[2359]: I0904 04:26:24.557417 2359 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 4 04:26:24.557569 kubelet[2359]: I0904 04:26:24.557546 2359 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 04:26:24.557967 kubelet[2359]: W0904 04:26:24.557904 2359 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Sep 4 04:26:24.558019 kubelet[2359]: E0904 04:26:24.557974 2359 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError"
Sep 4 04:26:24.558112 kubelet[2359]: E0904 04:26:24.557184 2359 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 04:26:24.558799 kubelet[2359]: E0904 04:26:24.558739 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="200ms"
Sep 4 04:26:24.562799 kubelet[2359]: I0904 04:26:24.562767 2359 factory.go:221] Registration of the containerd container factory successfully
Sep 4 04:26:24.583123 kubelet[2359]: I0904 04:26:24.583012 2359 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 04:26:24.585736 kubelet[2359]: I0904 04:26:24.585677 2359 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 04:26:24.585863 kubelet[2359]: I0904 04:26:24.585846 2359 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 04:26:24.585976 kubelet[2359]: I0904 04:26:24.585961 2359 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 4 04:26:24.586147 kubelet[2359]: E0904 04:26:24.586119 2359 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 04:26:24.586858 kubelet[2359]: W0904 04:26:24.586826 2359 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Sep 4 04:26:24.586973 kubelet[2359]: E0904 04:26:24.586949 2359 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError"
Sep 4 04:26:24.587058 kubelet[2359]: I0904 04:26:24.587023 2359 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 04:26:24.587148 kubelet[2359]: I0904 04:26:24.587134 2359 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 04:26:24.587232 kubelet[2359]: I0904 04:26:24.587219 2359 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 04:26:24.658325 kubelet[2359]: E0904 04:26:24.658221 2359 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 04:26:24.686742 kubelet[2359]: E0904 04:26:24.686668 2359 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 04:26:24.759218 kubelet[2359]: E0904 04:26:24.759148 2359 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 04:26:24.759769 kubelet[2359]: E0904 04:26:24.759711 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="400ms"
Sep 4 04:26:24.860499 kubelet[2359]: E0904 04:26:24.860266 2359 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 04:26:24.887621 kubelet[2359]: E0904 04:26:24.887519 2359 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 04:26:24.924469 kubelet[2359]: I0904 04:26:24.924394 2359 policy_none.go:49] "None policy: Start"
Sep 4 04:26:24.925816 kubelet[2359]: I0904 04:26:24.925775 2359 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 4 04:26:24.925816 kubelet[2359]: I0904 04:26:24.925811 2359 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 04:26:24.934695 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 04:26:24.948819 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 04:26:24.953977 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 04:26:24.961474 kubelet[2359]: E0904 04:26:24.961402 2359 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 04:26:24.968145 kubelet[2359]: I0904 04:26:24.968081 2359 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 04:26:24.968535 kubelet[2359]: I0904 04:26:24.968507 2359 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 04:26:24.968620 kubelet[2359]: I0904 04:26:24.968534 2359 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 04:26:24.968913 kubelet[2359]: I0904 04:26:24.968874 2359 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 04:26:24.970541 kubelet[2359]: E0904 04:26:24.970483 2359 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 4 04:26:25.071654 kubelet[2359]: I0904 04:26:25.071568 2359 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 4 04:26:25.072164 kubelet[2359]: E0904 04:26:25.072124 2359 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost"
Sep 4 04:26:25.161263 kubelet[2359]: E0904 04:26:25.161060 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="800ms"
Sep 4 04:26:25.274203 kubelet[2359]: I0904 04:26:25.274139 2359 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 4 04:26:25.274701 kubelet[2359]: E0904 04:26:25.274649 2359 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost"
Sep 4 04:26:25.298881 systemd[1]: Created slice kubepods-burstable-pod268e95267b28ba3fd4c7f66885ad4555.slice - libcontainer container kubepods-burstable-pod268e95267b28ba3fd4c7f66885ad4555.slice.
Sep 4 04:26:25.322126 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice.
Sep 4 04:26:25.337294 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice.
Sep 4 04:26:25.362380 kubelet[2359]: I0904 04:26:25.362316 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/268e95267b28ba3fd4c7f66885ad4555-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"268e95267b28ba3fd4c7f66885ad4555\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 04:26:25.362380 kubelet[2359]: I0904 04:26:25.362370 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/268e95267b28ba3fd4c7f66885ad4555-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"268e95267b28ba3fd4c7f66885ad4555\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 04:26:25.362380 kubelet[2359]: I0904 04:26:25.362396 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 04:26:25.362594 kubelet[2359]: I0904 04:26:25.362415 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 04:26:25.362594 kubelet[2359]: I0904 04:26:25.362498 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 04:26:25.362594 kubelet[2359]: I0904 04:26:25.362562 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 04:26:25.362668 kubelet[2359]: I0904 04:26:25.362611 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 04:26:25.362668 kubelet[2359]: I0904 04:26:25.362628 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/268e95267b28ba3fd4c7f66885ad4555-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"268e95267b28ba3fd4c7f66885ad4555\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 04:26:25.362668 kubelet[2359]: I0904 04:26:25.362643 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 04:26:25.503772 kubelet[2359]: W0904 04:26:25.503653 2359 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Sep 4 04:26:25.503772 kubelet[2359]: E0904 04:26:25.503732 2359 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError"
Sep 4 04:26:25.544329 kubelet[2359]: W0904 04:26:25.544220 2359 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Sep 4 04:26:25.544497 kubelet[2359]: E0904 04:26:25.544346 2359 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError"
Sep 4 04:26:25.619924 kubelet[2359]: E0904 04:26:25.619852 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:25.620813 containerd[1612]: time="2025-09-04T04:26:25.620757779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:268e95267b28ba3fd4c7f66885ad4555,Namespace:kube-system,Attempt:0,}"
Sep 4 04:26:25.635207 kubelet[2359]: E0904 04:26:25.635147 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:25.635882 containerd[1612]: time="2025-09-04T04:26:25.635810510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}"
Sep 4 04:26:25.641253 kubelet[2359]: E0904 04:26:25.641218 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:25.641685 containerd[1612]: time="2025-09-04T04:26:25.641647517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}"
Sep 4 04:26:25.677008 kubelet[2359]: I0904 04:26:25.676961 2359 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 4 04:26:25.677512 kubelet[2359]: E0904 04:26:25.677460 2359 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost"
Sep 4 04:26:25.935631 kubelet[2359]: W0904 04:26:25.935427 2359 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Sep 4 04:26:25.935631 kubelet[2359]: E0904 04:26:25.935515 2359 reflector.go:158] "Unhandled Error"
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Sep 4 04:26:25.962662 kubelet[2359]: E0904 04:26:25.962586 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="1.6s" Sep 4 04:26:26.159568 kubelet[2359]: W0904 04:26:26.159478 2359 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Sep 4 04:26:26.159568 kubelet[2359]: E0904 04:26:26.159540 2359 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Sep 4 04:26:26.403310 containerd[1612]: time="2025-09-04T04:26:26.403230716Z" level=info msg="connecting to shim c2f62515cdbd768d5ecfc04d2530c4d46511f57fcfdcddce60465ecf308bf3d0" address="unix:///run/containerd/s/2f7a5fbb3afb4e494af895d0c24c90714f7337fb6859d12e8ed5d0991b2ed808" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:26:26.408977 containerd[1612]: time="2025-09-04T04:26:26.408856158Z" level=info msg="connecting to shim f89d007064d36616bc7d19abf777da073cba15e5ba5c1d8f7bddc7bd0a41803e" address="unix:///run/containerd/s/932a43e8c408a5e7bb5962774dccfe11cecd77d129e90a7ca376a799f0d0b198" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:26:26.410700 containerd[1612]: 
time="2025-09-04T04:26:26.410595608Z" level=info msg="connecting to shim 83ee366e132e0be752a5b8525e6087e8867927c6b8fadc68a4bcafb44486475f" address="unix:///run/containerd/s/fa5a23397fd0f6f791747b1788a9c54e7306a07cefcfc08b8bde60d69e59c56b" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:26:26.451501 systemd[1]: Started cri-containerd-c2f62515cdbd768d5ecfc04d2530c4d46511f57fcfdcddce60465ecf308bf3d0.scope - libcontainer container c2f62515cdbd768d5ecfc04d2530c4d46511f57fcfdcddce60465ecf308bf3d0. Sep 4 04:26:26.461878 systemd[1]: Started cri-containerd-83ee366e132e0be752a5b8525e6087e8867927c6b8fadc68a4bcafb44486475f.scope - libcontainer container 83ee366e132e0be752a5b8525e6087e8867927c6b8fadc68a4bcafb44486475f. Sep 4 04:26:26.472558 systemd[1]: Started cri-containerd-f89d007064d36616bc7d19abf777da073cba15e5ba5c1d8f7bddc7bd0a41803e.scope - libcontainer container f89d007064d36616bc7d19abf777da073cba15e5ba5c1d8f7bddc7bd0a41803e. Sep 4 04:26:26.482029 kubelet[2359]: E0904 04:26:26.481395 2359 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Sep 4 04:26:26.482837 kubelet[2359]: I0904 04:26:26.482795 2359 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 04:26:26.485597 kubelet[2359]: E0904 04:26:26.485538 2359 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Sep 4 04:26:26.534928 containerd[1612]: time="2025-09-04T04:26:26.534798315Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2f62515cdbd768d5ecfc04d2530c4d46511f57fcfdcddce60465ecf308bf3d0\"" Sep 4 04:26:26.537543 kubelet[2359]: E0904 04:26:26.537310 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:26.543912 containerd[1612]: time="2025-09-04T04:26:26.543817938Z" level=info msg="CreateContainer within sandbox \"c2f62515cdbd768d5ecfc04d2530c4d46511f57fcfdcddce60465ecf308bf3d0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 04:26:26.547153 containerd[1612]: time="2025-09-04T04:26:26.547013513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"83ee366e132e0be752a5b8525e6087e8867927c6b8fadc68a4bcafb44486475f\"" Sep 4 04:26:26.550191 kubelet[2359]: E0904 04:26:26.549976 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:26.553335 containerd[1612]: time="2025-09-04T04:26:26.552400436Z" level=info msg="CreateContainer within sandbox \"83ee366e132e0be752a5b8525e6087e8867927c6b8fadc68a4bcafb44486475f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 04:26:26.567480 containerd[1612]: time="2025-09-04T04:26:26.567425457Z" level=info msg="Container f7571bc257071811064fb2226db91337b9a17d6162c96d23e982e9812209232a: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:26:26.580410 containerd[1612]: time="2025-09-04T04:26:26.580347519Z" level=info msg="Container 5736b04505eca0b10ea02194b490341128036030a05ef3bc7e8a3d65c698212b: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:26:26.586729 
containerd[1612]: time="2025-09-04T04:26:26.586549671Z" level=info msg="CreateContainer within sandbox \"c2f62515cdbd768d5ecfc04d2530c4d46511f57fcfdcddce60465ecf308bf3d0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f7571bc257071811064fb2226db91337b9a17d6162c96d23e982e9812209232a\"" Sep 4 04:26:26.586729 containerd[1612]: time="2025-09-04T04:26:26.586716976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:268e95267b28ba3fd4c7f66885ad4555,Namespace:kube-system,Attempt:0,} returns sandbox id \"f89d007064d36616bc7d19abf777da073cba15e5ba5c1d8f7bddc7bd0a41803e\"" Sep 4 04:26:26.587607 containerd[1612]: time="2025-09-04T04:26:26.587563186Z" level=info msg="StartContainer for \"f7571bc257071811064fb2226db91337b9a17d6162c96d23e982e9812209232a\"" Sep 4 04:26:26.587922 kubelet[2359]: E0904 04:26:26.587893 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:26.589447 containerd[1612]: time="2025-09-04T04:26:26.589406553Z" level=info msg="connecting to shim f7571bc257071811064fb2226db91337b9a17d6162c96d23e982e9812209232a" address="unix:///run/containerd/s/2f7a5fbb3afb4e494af895d0c24c90714f7337fb6859d12e8ed5d0991b2ed808" protocol=ttrpc version=3 Sep 4 04:26:26.590319 containerd[1612]: time="2025-09-04T04:26:26.590242332Z" level=info msg="CreateContainer within sandbox \"f89d007064d36616bc7d19abf777da073cba15e5ba5c1d8f7bddc7bd0a41803e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 04:26:26.603625 containerd[1612]: time="2025-09-04T04:26:26.603567545Z" level=info msg="CreateContainer within sandbox \"83ee366e132e0be752a5b8525e6087e8867927c6b8fadc68a4bcafb44486475f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5736b04505eca0b10ea02194b490341128036030a05ef3bc7e8a3d65c698212b\"" Sep 4 04:26:26.604039 
containerd[1612]: time="2025-09-04T04:26:26.603992977Z" level=info msg="StartContainer for \"5736b04505eca0b10ea02194b490341128036030a05ef3bc7e8a3d65c698212b\"" Sep 4 04:26:26.605272 containerd[1612]: time="2025-09-04T04:26:26.605234549Z" level=info msg="connecting to shim 5736b04505eca0b10ea02194b490341128036030a05ef3bc7e8a3d65c698212b" address="unix:///run/containerd/s/fa5a23397fd0f6f791747b1788a9c54e7306a07cefcfc08b8bde60d69e59c56b" protocol=ttrpc version=3 Sep 4 04:26:26.611338 containerd[1612]: time="2025-09-04T04:26:26.611118209Z" level=info msg="Container 0e88a78a5784f037220bef8923ad744000fc4ef5300ef028eddedfe824fbae21: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:26:26.647553 systemd[1]: Started cri-containerd-f7571bc257071811064fb2226db91337b9a17d6162c96d23e982e9812209232a.scope - libcontainer container f7571bc257071811064fb2226db91337b9a17d6162c96d23e982e9812209232a. Sep 4 04:26:26.653092 containerd[1612]: time="2025-09-04T04:26:26.653036397Z" level=info msg="CreateContainer within sandbox \"f89d007064d36616bc7d19abf777da073cba15e5ba5c1d8f7bddc7bd0a41803e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0e88a78a5784f037220bef8923ad744000fc4ef5300ef028eddedfe824fbae21\"" Sep 4 04:26:26.654384 containerd[1612]: time="2025-09-04T04:26:26.654182099Z" level=info msg="StartContainer for \"0e88a78a5784f037220bef8923ad744000fc4ef5300ef028eddedfe824fbae21\"" Sep 4 04:26:26.655976 containerd[1612]: time="2025-09-04T04:26:26.655934962Z" level=info msg="connecting to shim 0e88a78a5784f037220bef8923ad744000fc4ef5300ef028eddedfe824fbae21" address="unix:///run/containerd/s/932a43e8c408a5e7bb5962774dccfe11cecd77d129e90a7ca376a799f0d0b198" protocol=ttrpc version=3 Sep 4 04:26:26.658528 systemd[1]: Started cri-containerd-5736b04505eca0b10ea02194b490341128036030a05ef3bc7e8a3d65c698212b.scope - libcontainer container 5736b04505eca0b10ea02194b490341128036030a05ef3bc7e8a3d65c698212b. 
Sep 4 04:26:26.720760 systemd[1]: Started cri-containerd-0e88a78a5784f037220bef8923ad744000fc4ef5300ef028eddedfe824fbae21.scope - libcontainer container 0e88a78a5784f037220bef8923ad744000fc4ef5300ef028eddedfe824fbae21. Sep 4 04:26:26.780405 containerd[1612]: time="2025-09-04T04:26:26.780338648Z" level=info msg="StartContainer for \"f7571bc257071811064fb2226db91337b9a17d6162c96d23e982e9812209232a\" returns successfully" Sep 4 04:26:26.805406 containerd[1612]: time="2025-09-04T04:26:26.805329586Z" level=info msg="StartContainer for \"5736b04505eca0b10ea02194b490341128036030a05ef3bc7e8a3d65c698212b\" returns successfully" Sep 4 04:26:26.849763 containerd[1612]: time="2025-09-04T04:26:26.849694499Z" level=info msg="StartContainer for \"0e88a78a5784f037220bef8923ad744000fc4ef5300ef028eddedfe824fbae21\" returns successfully" Sep 4 04:26:27.603023 kubelet[2359]: E0904 04:26:27.602977 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:27.605778 kubelet[2359]: E0904 04:26:27.605756 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:27.607646 kubelet[2359]: E0904 04:26:27.607612 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:28.087699 kubelet[2359]: I0904 04:26:28.087650 2359 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 04:26:28.445555 kubelet[2359]: E0904 04:26:28.445466 2359 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 04:26:28.520557 kubelet[2359]: I0904 04:26:28.520148 2359 kubelet_node_status.go:75] "Successfully registered 
node" node="localhost" Sep 4 04:26:28.536274 kubelet[2359]: I0904 04:26:28.536231 2359 apiserver.go:52] "Watching apiserver" Sep 4 04:26:28.557918 kubelet[2359]: I0904 04:26:28.557851 2359 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 4 04:26:28.621438 kubelet[2359]: E0904 04:26:28.621371 2359 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 04:26:28.622136 kubelet[2359]: E0904 04:26:28.621596 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:28.622136 kubelet[2359]: E0904 04:26:28.621384 2359 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 4 04:26:28.622136 kubelet[2359]: E0904 04:26:28.621892 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:28.622136 kubelet[2359]: E0904 04:26:28.621978 2359 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 4 04:26:28.622136 kubelet[2359]: E0904 04:26:28.622111 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:29.618761 kubelet[2359]: E0904 04:26:29.618717 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:30.614603 kubelet[2359]: E0904 04:26:30.614565 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:30.986226 systemd[1]: Reload requested from client PID 2640 ('systemctl') (unit session-7.scope)... Sep 4 04:26:30.986244 systemd[1]: Reloading... Sep 4 04:26:31.105345 zram_generator::config[2686]: No configuration found. Sep 4 04:26:31.372191 systemd[1]: Reloading finished in 385 ms. Sep 4 04:26:31.413655 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 04:26:31.430010 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 04:26:31.430459 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 04:26:31.430530 systemd[1]: kubelet.service: Consumed 967ms CPU time, 133.4M memory peak. Sep 4 04:26:31.433023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 04:26:31.717664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 04:26:31.729002 (kubelet)[2728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 04:26:31.839767 kubelet[2728]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 04:26:31.839767 kubelet[2728]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 04:26:31.839767 kubelet[2728]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 04:26:31.840674 kubelet[2728]: I0904 04:26:31.840005 2728 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 04:26:31.858625 kubelet[2728]: I0904 04:26:31.858485 2728 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 4 04:26:31.858625 kubelet[2728]: I0904 04:26:31.858575 2728 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 04:26:31.859247 kubelet[2728]: I0904 04:26:31.859205 2728 server.go:934] "Client rotation is on, will bootstrap in background" Sep 4 04:26:31.861894 kubelet[2728]: I0904 04:26:31.861803 2728 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 04:26:31.870026 kubelet[2728]: I0904 04:26:31.869895 2728 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 04:26:31.887409 kubelet[2728]: I0904 04:26:31.887345 2728 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 04:26:31.896761 kubelet[2728]: I0904 04:26:31.896692 2728 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 04:26:31.896948 kubelet[2728]: I0904 04:26:31.896899 2728 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 4 04:26:31.897377 kubelet[2728]: I0904 04:26:31.897255 2728 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 04:26:31.897867 kubelet[2728]: I0904 04:26:31.897356 2728 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 4 04:26:31.898107 kubelet[2728]: I0904 04:26:31.897887 2728 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 04:26:31.898107 kubelet[2728]: I0904 04:26:31.897930 2728 container_manager_linux.go:300] "Creating device plugin manager" Sep 4 04:26:31.898244 kubelet[2728]: I0904 04:26:31.898161 2728 state_mem.go:36] "Initialized new in-memory state store" Sep 4 04:26:31.898641 kubelet[2728]: I0904 04:26:31.898592 2728 kubelet.go:408] "Attempting to sync node with API server" Sep 4 04:26:31.898641 kubelet[2728]: I0904 04:26:31.898640 2728 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 04:26:31.898731 kubelet[2728]: I0904 04:26:31.898699 2728 kubelet.go:314] "Adding apiserver pod source" Sep 4 04:26:31.898731 kubelet[2728]: I0904 04:26:31.898720 2728 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 04:26:31.901028 kubelet[2728]: I0904 04:26:31.900804 2728 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 4 04:26:31.903130 kubelet[2728]: I0904 04:26:31.902618 2728 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 04:26:31.906366 kubelet[2728]: I0904 04:26:31.905434 2728 server.go:1274] "Started kubelet" Sep 4 04:26:31.908633 kubelet[2728]: I0904 04:26:31.908518 2728 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 04:26:31.909523 kubelet[2728]: I0904 04:26:31.907460 2728 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 04:26:31.911798 kubelet[2728]: I0904 04:26:31.911680 2728 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 04:26:31.912363 kubelet[2728]: I0904 04:26:31.912340 2728 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 04:26:31.913515 kubelet[2728]: 
I0904 04:26:31.913464 2728 server.go:449] "Adding debug handlers to kubelet server" Sep 4 04:26:31.917019 kubelet[2728]: I0904 04:26:31.916962 2728 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 04:26:31.920854 kubelet[2728]: I0904 04:26:31.920792 2728 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 4 04:26:31.923148 kubelet[2728]: E0904 04:26:31.923103 2728 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 04:26:31.924113 kubelet[2728]: I0904 04:26:31.924080 2728 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 4 04:26:31.927727 kubelet[2728]: I0904 04:26:31.927684 2728 reconciler.go:26] "Reconciler: start to sync state" Sep 4 04:26:31.928010 kubelet[2728]: I0904 04:26:31.927954 2728 factory.go:221] Registration of the systemd container factory successfully Sep 4 04:26:31.929247 kubelet[2728]: I0904 04:26:31.929163 2728 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 04:26:31.935910 kubelet[2728]: E0904 04:26:31.935841 2728 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 04:26:31.936033 kubelet[2728]: I0904 04:26:31.935856 2728 factory.go:221] Registration of the containerd container factory successfully Sep 4 04:26:31.963777 kubelet[2728]: I0904 04:26:31.963714 2728 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 04:26:31.972468 kubelet[2728]: I0904 04:26:31.970836 2728 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 04:26:31.972468 kubelet[2728]: I0904 04:26:31.971271 2728 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 04:26:31.972468 kubelet[2728]: I0904 04:26:31.971648 2728 kubelet.go:2321] "Starting kubelet main sync loop" Sep 4 04:26:31.972468 kubelet[2728]: E0904 04:26:31.971778 2728 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 04:26:32.072557 kubelet[2728]: E0904 04:26:32.072260 2728 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 04:26:32.084405 kubelet[2728]: I0904 04:26:32.083645 2728 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 04:26:32.084405 kubelet[2728]: I0904 04:26:32.084385 2728 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 04:26:32.084571 kubelet[2728]: I0904 04:26:32.084495 2728 state_mem.go:36] "Initialized new in-memory state store" Sep 4 04:26:32.085057 kubelet[2728]: I0904 04:26:32.085006 2728 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 04:26:32.085109 kubelet[2728]: I0904 04:26:32.085041 2728 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 04:26:32.085109 kubelet[2728]: I0904 04:26:32.085079 2728 policy_none.go:49] "None policy: Start" Sep 4 04:26:32.087931 kubelet[2728]: I0904 04:26:32.087848 2728 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 04:26:32.088169 kubelet[2728]: I0904 04:26:32.088129 2728 state_mem.go:35] "Initializing new in-memory state store" Sep 4 04:26:32.088760 kubelet[2728]: I0904 04:26:32.088710 2728 state_mem.go:75] "Updated machine memory state" Sep 4 04:26:32.110318 kubelet[2728]: I0904 04:26:32.109332 2728 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 04:26:32.110318 kubelet[2728]: I0904 04:26:32.110109 2728 
eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 04:26:32.110318 kubelet[2728]: I0904 04:26:32.110145 2728 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 04:26:32.111386 kubelet[2728]: I0904 04:26:32.111310 2728 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 04:26:32.231050 kubelet[2728]: I0904 04:26:32.230906 2728 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 04:26:32.241658 kubelet[2728]: I0904 04:26:32.241570 2728 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 4 04:26:32.241969 kubelet[2728]: I0904 04:26:32.241941 2728 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 4 04:26:32.283122 kubelet[2728]: E0904 04:26:32.283069 2728 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 04:26:32.330766 kubelet[2728]: I0904 04:26:32.330671 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 4 04:26:32.330766 kubelet[2728]: I0904 04:26:32.330733 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/268e95267b28ba3fd4c7f66885ad4555-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"268e95267b28ba3fd4c7f66885ad4555\") " pod="kube-system/kube-apiserver-localhost" Sep 4 04:26:32.330766 kubelet[2728]: I0904 04:26:32.330763 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/268e95267b28ba3fd4c7f66885ad4555-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"268e95267b28ba3fd4c7f66885ad4555\") " pod="kube-system/kube-apiserver-localhost" Sep 4 04:26:32.330766 kubelet[2728]: I0904 04:26:32.330798 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:26:32.330766 kubelet[2728]: I0904 04:26:32.330823 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:26:32.331222 kubelet[2728]: I0904 04:26:32.330842 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/268e95267b28ba3fd4c7f66885ad4555-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"268e95267b28ba3fd4c7f66885ad4555\") " pod="kube-system/kube-apiserver-localhost" Sep 4 04:26:32.331222 kubelet[2728]: I0904 04:26:32.330893 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:26:32.331222 kubelet[2728]: I0904 04:26:32.330940 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:26:32.331222 kubelet[2728]: I0904 04:26:32.330969 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:26:32.582666 kubelet[2728]: E0904 04:26:32.582482 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:32.582666 kubelet[2728]: E0904 04:26:32.582642 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:32.583485 kubelet[2728]: E0904 04:26:32.583411 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:32.901010 kubelet[2728]: I0904 04:26:32.900453 2728 apiserver.go:52] "Watching apiserver" Sep 4 04:26:32.924750 kubelet[2728]: I0904 04:26:32.924698 2728 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 4 04:26:33.032155 kubelet[2728]: E0904 04:26:33.032109 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:33.033025 kubelet[2728]: E0904 04:26:33.032987 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:33.034536 kubelet[2728]: E0904 04:26:33.034484 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:33.058435 kubelet[2728]: I0904 04:26:33.058317 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.058269687 podStartE2EDuration="1.058269687s" podCreationTimestamp="2025-09-04 04:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:26:33.05825741 +0000 UTC m=+1.323167054" watchObservedRunningTime="2025-09-04 04:26:33.058269687 +0000 UTC m=+1.323179331" Sep 4 04:26:33.066415 kubelet[2728]: I0904 04:26:33.066254 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.066237614 podStartE2EDuration="4.066237614s" podCreationTimestamp="2025-09-04 04:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:26:33.065766661 +0000 UTC m=+1.330676295" watchObservedRunningTime="2025-09-04 04:26:33.066237614 +0000 UTC m=+1.331147248" Sep 4 04:26:33.072797 kubelet[2728]: I0904 04:26:33.072725 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.072689965 podStartE2EDuration="1.072689965s" podCreationTimestamp="2025-09-04 04:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:26:33.072356107 +0000 UTC m=+1.337265741" watchObservedRunningTime="2025-09-04 04:26:33.072689965 +0000 UTC m=+1.337599609" Sep 4 
04:26:34.003404 update_engine[1511]: I20250904 04:26:34.003331 1511 update_attempter.cc:509] Updating boot flags... Sep 4 04:26:34.033002 kubelet[2728]: E0904 04:26:34.032955 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:35.550129 kubelet[2728]: E0904 04:26:35.550048 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:37.630082 kubelet[2728]: I0904 04:26:37.630028 2728 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 04:26:37.630596 containerd[1612]: time="2025-09-04T04:26:37.630532504Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 04:26:37.631116 kubelet[2728]: I0904 04:26:37.630979 2728 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 04:26:37.936588 systemd[1]: Created slice kubepods-besteffort-pod3ce9c705_1e67_4f6c_95a8_c9941e1db4ef.slice - libcontainer container kubepods-besteffort-pod3ce9c705_1e67_4f6c_95a8_c9941e1db4ef.slice. 
Sep 4 04:26:38.031878 kubelet[2728]: I0904 04:26:38.031812 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ce9c705-1e67-4f6c-95a8-c9941e1db4ef-kube-proxy\") pod \"kube-proxy-k9tcj\" (UID: \"3ce9c705-1e67-4f6c-95a8-c9941e1db4ef\") " pod="kube-system/kube-proxy-k9tcj" Sep 4 04:26:38.031878 kubelet[2728]: I0904 04:26:38.031875 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lb8g\" (UniqueName: \"kubernetes.io/projected/3ce9c705-1e67-4f6c-95a8-c9941e1db4ef-kube-api-access-6lb8g\") pod \"kube-proxy-k9tcj\" (UID: \"3ce9c705-1e67-4f6c-95a8-c9941e1db4ef\") " pod="kube-system/kube-proxy-k9tcj" Sep 4 04:26:38.032133 kubelet[2728]: I0904 04:26:38.031901 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ce9c705-1e67-4f6c-95a8-c9941e1db4ef-xtables-lock\") pod \"kube-proxy-k9tcj\" (UID: \"3ce9c705-1e67-4f6c-95a8-c9941e1db4ef\") " pod="kube-system/kube-proxy-k9tcj" Sep 4 04:26:38.032133 kubelet[2728]: I0904 04:26:38.031918 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ce9c705-1e67-4f6c-95a8-c9941e1db4ef-lib-modules\") pod \"kube-proxy-k9tcj\" (UID: \"3ce9c705-1e67-4f6c-95a8-c9941e1db4ef\") " pod="kube-system/kube-proxy-k9tcj" Sep 4 04:26:38.246404 kubelet[2728]: E0904 04:26:38.246345 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:38.247085 containerd[1612]: time="2025-09-04T04:26:38.247024807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k9tcj,Uid:3ce9c705-1e67-4f6c-95a8-c9941e1db4ef,Namespace:kube-system,Attempt:0,}" Sep 4 
04:26:38.462501 containerd[1612]: time="2025-09-04T04:26:38.462419746Z" level=info msg="connecting to shim 8fb39650a30a015478508f913ede028620f458e4e2fbe4204593fc01762994d2" address="unix:///run/containerd/s/eb47a8fcfc545c99d1a56902ca24bac4ec7a17e12da5567284d6a7aba3fc8518" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:26:38.497562 systemd[1]: Created slice kubepods-besteffort-pod3a5a952e_c85b_4f17_8996_ab4a1a0ac805.slice - libcontainer container kubepods-besteffort-pod3a5a952e_c85b_4f17_8996_ab4a1a0ac805.slice. Sep 4 04:26:38.524451 systemd[1]: Started cri-containerd-8fb39650a30a015478508f913ede028620f458e4e2fbe4204593fc01762994d2.scope - libcontainer container 8fb39650a30a015478508f913ede028620f458e4e2fbe4204593fc01762994d2. Sep 4 04:26:38.554068 containerd[1612]: time="2025-09-04T04:26:38.554018185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k9tcj,Uid:3ce9c705-1e67-4f6c-95a8-c9941e1db4ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fb39650a30a015478508f913ede028620f458e4e2fbe4204593fc01762994d2\"" Sep 4 04:26:38.554981 kubelet[2728]: E0904 04:26:38.554950 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:38.558105 containerd[1612]: time="2025-09-04T04:26:38.558065838Z" level=info msg="CreateContainer within sandbox \"8fb39650a30a015478508f913ede028620f458e4e2fbe4204593fc01762994d2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 04:26:38.572460 containerd[1612]: time="2025-09-04T04:26:38.571109898Z" level=info msg="Container 72e3e12a7ca6da502e721fedba7ead4cb1e0c45a6360403725035980e7206835: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:26:38.580617 containerd[1612]: time="2025-09-04T04:26:38.580571450Z" level=info msg="CreateContainer within sandbox \"8fb39650a30a015478508f913ede028620f458e4e2fbe4204593fc01762994d2\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"72e3e12a7ca6da502e721fedba7ead4cb1e0c45a6360403725035980e7206835\"" Sep 4 04:26:38.581187 containerd[1612]: time="2025-09-04T04:26:38.581154499Z" level=info msg="StartContainer for \"72e3e12a7ca6da502e721fedba7ead4cb1e0c45a6360403725035980e7206835\"" Sep 4 04:26:38.582866 containerd[1612]: time="2025-09-04T04:26:38.582825559Z" level=info msg="connecting to shim 72e3e12a7ca6da502e721fedba7ead4cb1e0c45a6360403725035980e7206835" address="unix:///run/containerd/s/eb47a8fcfc545c99d1a56902ca24bac4ec7a17e12da5567284d6a7aba3fc8518" protocol=ttrpc version=3 Sep 4 04:26:38.608424 systemd[1]: Started cri-containerd-72e3e12a7ca6da502e721fedba7ead4cb1e0c45a6360403725035980e7206835.scope - libcontainer container 72e3e12a7ca6da502e721fedba7ead4cb1e0c45a6360403725035980e7206835. Sep 4 04:26:38.635997 kubelet[2728]: I0904 04:26:38.635939 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b725z\" (UniqueName: \"kubernetes.io/projected/3a5a952e-c85b-4f17-8996-ab4a1a0ac805-kube-api-access-b725z\") pod \"tigera-operator-58fc44c59b-2x58c\" (UID: \"3a5a952e-c85b-4f17-8996-ab4a1a0ac805\") " pod="tigera-operator/tigera-operator-58fc44c59b-2x58c" Sep 4 04:26:38.636442 kubelet[2728]: I0904 04:26:38.635987 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3a5a952e-c85b-4f17-8996-ab4a1a0ac805-var-lib-calico\") pod \"tigera-operator-58fc44c59b-2x58c\" (UID: \"3a5a952e-c85b-4f17-8996-ab4a1a0ac805\") " pod="tigera-operator/tigera-operator-58fc44c59b-2x58c" Sep 4 04:26:38.656939 containerd[1612]: time="2025-09-04T04:26:38.656876005Z" level=info msg="StartContainer for \"72e3e12a7ca6da502e721fedba7ead4cb1e0c45a6360403725035980e7206835\" returns successfully" Sep 4 04:26:38.802118 containerd[1612]: time="2025-09-04T04:26:38.801979249Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-2x58c,Uid:3a5a952e-c85b-4f17-8996-ab4a1a0ac805,Namespace:tigera-operator,Attempt:0,}" Sep 4 04:26:38.829982 containerd[1612]: time="2025-09-04T04:26:38.829913062Z" level=info msg="connecting to shim cd24c1131663c11b4be2a52e04190f1487f1d11552a93172f06a3083ce01d928" address="unix:///run/containerd/s/8331825100e41e577861fdea5497efa2e78d8b46767401f80be84813028c76b4" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:26:38.858420 systemd[1]: Started cri-containerd-cd24c1131663c11b4be2a52e04190f1487f1d11552a93172f06a3083ce01d928.scope - libcontainer container cd24c1131663c11b4be2a52e04190f1487f1d11552a93172f06a3083ce01d928. Sep 4 04:26:38.905104 containerd[1612]: time="2025-09-04T04:26:38.905056667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-2x58c,Uid:3a5a952e-c85b-4f17-8996-ab4a1a0ac805,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cd24c1131663c11b4be2a52e04190f1487f1d11552a93172f06a3083ce01d928\"" Sep 4 04:26:38.907438 containerd[1612]: time="2025-09-04T04:26:38.907412669Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 4 04:26:39.046928 kubelet[2728]: E0904 04:26:39.046650 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:39.304597 kubelet[2728]: E0904 04:26:39.304547 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:39.549857 kubelet[2728]: E0904 04:26:39.549806 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:39.587448 kubelet[2728]: I0904 04:26:39.587203 2728 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-proxy-k9tcj" podStartSLOduration=2.587177316 podStartE2EDuration="2.587177316s" podCreationTimestamp="2025-09-04 04:26:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:26:39.056931721 +0000 UTC m=+7.321841365" watchObservedRunningTime="2025-09-04 04:26:39.587177316 +0000 UTC m=+7.852086970" Sep 4 04:26:40.048219 kubelet[2728]: E0904 04:26:40.048174 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:40.048798 kubelet[2728]: E0904 04:26:40.048395 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:41.185012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781423910.mount: Deactivated successfully. 
Sep 4 04:26:41.837169 containerd[1612]: time="2025-09-04T04:26:41.837060971Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:41.838051 containerd[1612]: time="2025-09-04T04:26:41.837980023Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 4 04:26:41.839639 containerd[1612]: time="2025-09-04T04:26:41.839586927Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:41.842065 containerd[1612]: time="2025-09-04T04:26:41.842021841Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:41.842748 containerd[1612]: time="2025-09-04T04:26:41.842693748Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.935248039s" Sep 4 04:26:41.842748 containerd[1612]: time="2025-09-04T04:26:41.842723203Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 4 04:26:41.844734 containerd[1612]: time="2025-09-04T04:26:41.844704320Z" level=info msg="CreateContainer within sandbox \"cd24c1131663c11b4be2a52e04190f1487f1d11552a93172f06a3083ce01d928\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 04:26:41.852518 containerd[1612]: time="2025-09-04T04:26:41.852447365Z" level=info msg="Container 
937076d83057cbab739f9a63950588a40b7be6febfb5e66907773c0cae5bddbe: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:26:41.862503 containerd[1612]: time="2025-09-04T04:26:41.862449490Z" level=info msg="CreateContainer within sandbox \"cd24c1131663c11b4be2a52e04190f1487f1d11552a93172f06a3083ce01d928\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"937076d83057cbab739f9a63950588a40b7be6febfb5e66907773c0cae5bddbe\"" Sep 4 04:26:41.863011 containerd[1612]: time="2025-09-04T04:26:41.862983271Z" level=info msg="StartContainer for \"937076d83057cbab739f9a63950588a40b7be6febfb5e66907773c0cae5bddbe\"" Sep 4 04:26:41.863859 containerd[1612]: time="2025-09-04T04:26:41.863831322Z" level=info msg="connecting to shim 937076d83057cbab739f9a63950588a40b7be6febfb5e66907773c0cae5bddbe" address="unix:///run/containerd/s/8331825100e41e577861fdea5497efa2e78d8b46767401f80be84813028c76b4" protocol=ttrpc version=3 Sep 4 04:26:41.920514 systemd[1]: Started cri-containerd-937076d83057cbab739f9a63950588a40b7be6febfb5e66907773c0cae5bddbe.scope - libcontainer container 937076d83057cbab739f9a63950588a40b7be6febfb5e66907773c0cae5bddbe. 
Sep 4 04:26:41.955936 containerd[1612]: time="2025-09-04T04:26:41.955884644Z" level=info msg="StartContainer for \"937076d83057cbab739f9a63950588a40b7be6febfb5e66907773c0cae5bddbe\" returns successfully" Sep 4 04:26:42.062484 kubelet[2728]: I0904 04:26:42.062371 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-2x58c" podStartSLOduration=1.125119676 podStartE2EDuration="4.062341911s" podCreationTimestamp="2025-09-04 04:26:38 +0000 UTC" firstStartedPulling="2025-09-04 04:26:38.90624626 +0000 UTC m=+7.171155904" lastFinishedPulling="2025-09-04 04:26:41.843468495 +0000 UTC m=+10.108378139" observedRunningTime="2025-09-04 04:26:42.061900586 +0000 UTC m=+10.326810230" watchObservedRunningTime="2025-09-04 04:26:42.062341911 +0000 UTC m=+10.327251556" Sep 4 04:26:45.555841 kubelet[2728]: E0904 04:26:45.555786 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:46.061739 kubelet[2728]: E0904 04:26:46.061691 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:47.721925 sudo[1783]: pam_unix(sudo:session): session closed for user root Sep 4 04:26:47.724766 sshd[1782]: Connection closed by 10.0.0.1 port 46132 Sep 4 04:26:47.727183 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Sep 4 04:26:47.736849 systemd[1]: sshd@6-10.0.0.112:22-10.0.0.1:46132.service: Deactivated successfully. Sep 4 04:26:47.744007 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 04:26:47.744774 systemd[1]: session-7.scope: Consumed 5.791s CPU time, 227M memory peak. Sep 4 04:26:47.747026 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit. Sep 4 04:26:47.750391 systemd-logind[1509]: Removed session 7. 
Sep 4 04:26:51.409073 systemd[1]: Created slice kubepods-besteffort-pod372f205a_3efa_4750_8131_59a24016a6d4.slice - libcontainer container kubepods-besteffort-pod372f205a_3efa_4750_8131_59a24016a6d4.slice. Sep 4 04:26:51.522233 kubelet[2728]: I0904 04:26:51.522165 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/372f205a-3efa-4750-8131-59a24016a6d4-typha-certs\") pod \"calico-typha-6d55776bbc-9gpvz\" (UID: \"372f205a-3efa-4750-8131-59a24016a6d4\") " pod="calico-system/calico-typha-6d55776bbc-9gpvz" Sep 4 04:26:51.522233 kubelet[2728]: I0904 04:26:51.522226 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/372f205a-3efa-4750-8131-59a24016a6d4-tigera-ca-bundle\") pod \"calico-typha-6d55776bbc-9gpvz\" (UID: \"372f205a-3efa-4750-8131-59a24016a6d4\") " pod="calico-system/calico-typha-6d55776bbc-9gpvz" Sep 4 04:26:51.522870 kubelet[2728]: I0904 04:26:51.522263 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkm5d\" (UniqueName: \"kubernetes.io/projected/372f205a-3efa-4750-8131-59a24016a6d4-kube-api-access-dkm5d\") pod \"calico-typha-6d55776bbc-9gpvz\" (UID: \"372f205a-3efa-4750-8131-59a24016a6d4\") " pod="calico-system/calico-typha-6d55776bbc-9gpvz" Sep 4 04:26:51.714984 kubelet[2728]: E0904 04:26:51.714935 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:51.715625 containerd[1612]: time="2025-09-04T04:26:51.715580796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d55776bbc-9gpvz,Uid:372f205a-3efa-4750-8131-59a24016a6d4,Namespace:calico-system,Attempt:0,}" Sep 4 04:26:51.908008 systemd[1]: Created slice 
kubepods-besteffort-pod9e501088_f977_4e96_9426_2baf1ea106b1.slice - libcontainer container kubepods-besteffort-pod9e501088_f977_4e96_9426_2baf1ea106b1.slice. Sep 4 04:26:51.924125 containerd[1612]: time="2025-09-04T04:26:51.924047153Z" level=info msg="connecting to shim c0f261fa5b596e2bacb37ac0eac13bb52d07071eaf04334d71eb9a4141621040" address="unix:///run/containerd/s/4c891fedc195a69e3fa032090d5e8992662c3bab8b9b5964490322ddcb957a2e" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:26:51.967502 systemd[1]: Started cri-containerd-c0f261fa5b596e2bacb37ac0eac13bb52d07071eaf04334d71eb9a4141621040.scope - libcontainer container c0f261fa5b596e2bacb37ac0eac13bb52d07071eaf04334d71eb9a4141621040. Sep 4 04:26:52.022091 containerd[1612]: time="2025-09-04T04:26:52.022021721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d55776bbc-9gpvz,Uid:372f205a-3efa-4750-8131-59a24016a6d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0f261fa5b596e2bacb37ac0eac13bb52d07071eaf04334d71eb9a4141621040\"" Sep 4 04:26:52.023969 kubelet[2728]: E0904 04:26:52.023930 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:52.025815 containerd[1612]: time="2025-09-04T04:26:52.025777988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 4 04:26:52.026070 kubelet[2728]: I0904 04:26:52.026036 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9e501088-f977-4e96-9426-2baf1ea106b1-cni-net-dir\") pod \"calico-node-wnrkv\" (UID: \"9e501088-f977-4e96-9426-2baf1ea106b1\") " pod="calico-system/calico-node-wnrkv" Sep 4 04:26:52.026162 kubelet[2728]: I0904 04:26:52.026131 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/9e501088-f977-4e96-9426-2baf1ea106b1-policysync\") pod \"calico-node-wnrkv\" (UID: \"9e501088-f977-4e96-9426-2baf1ea106b1\") " pod="calico-system/calico-node-wnrkv" Sep 4 04:26:52.026234 kubelet[2728]: I0904 04:26:52.026206 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e501088-f977-4e96-9426-2baf1ea106b1-tigera-ca-bundle\") pod \"calico-node-wnrkv\" (UID: \"9e501088-f977-4e96-9426-2baf1ea106b1\") " pod="calico-system/calico-node-wnrkv" Sep 4 04:26:52.026319 kubelet[2728]: I0904 04:26:52.026270 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e501088-f977-4e96-9426-2baf1ea106b1-xtables-lock\") pod \"calico-node-wnrkv\" (UID: \"9e501088-f977-4e96-9426-2baf1ea106b1\") " pod="calico-system/calico-node-wnrkv" Sep 4 04:26:52.026390 kubelet[2728]: I0904 04:26:52.026336 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pjhm\" (UniqueName: \"kubernetes.io/projected/9e501088-f977-4e96-9426-2baf1ea106b1-kube-api-access-7pjhm\") pod \"calico-node-wnrkv\" (UID: \"9e501088-f977-4e96-9426-2baf1ea106b1\") " pod="calico-system/calico-node-wnrkv" Sep 4 04:26:52.026437 kubelet[2728]: I0904 04:26:52.026394 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9e501088-f977-4e96-9426-2baf1ea106b1-var-lib-calico\") pod \"calico-node-wnrkv\" (UID: \"9e501088-f977-4e96-9426-2baf1ea106b1\") " pod="calico-system/calico-node-wnrkv" Sep 4 04:26:52.026437 kubelet[2728]: I0904 04:26:52.026431 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/9e501088-f977-4e96-9426-2baf1ea106b1-node-certs\") pod \"calico-node-wnrkv\" (UID: \"9e501088-f977-4e96-9426-2baf1ea106b1\") " pod="calico-system/calico-node-wnrkv" Sep 4 04:26:52.026536 kubelet[2728]: I0904 04:26:52.026484 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9e501088-f977-4e96-9426-2baf1ea106b1-var-run-calico\") pod \"calico-node-wnrkv\" (UID: \"9e501088-f977-4e96-9426-2baf1ea106b1\") " pod="calico-system/calico-node-wnrkv" Sep 4 04:26:52.026536 kubelet[2728]: I0904 04:26:52.026512 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9e501088-f977-4e96-9426-2baf1ea106b1-cni-log-dir\") pod \"calico-node-wnrkv\" (UID: \"9e501088-f977-4e96-9426-2baf1ea106b1\") " pod="calico-system/calico-node-wnrkv" Sep 4 04:26:52.026612 kubelet[2728]: I0904 04:26:52.026534 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9e501088-f977-4e96-9426-2baf1ea106b1-cni-bin-dir\") pod \"calico-node-wnrkv\" (UID: \"9e501088-f977-4e96-9426-2baf1ea106b1\") " pod="calico-system/calico-node-wnrkv" Sep 4 04:26:52.026612 kubelet[2728]: I0904 04:26:52.026594 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9e501088-f977-4e96-9426-2baf1ea106b1-flexvol-driver-host\") pod \"calico-node-wnrkv\" (UID: \"9e501088-f977-4e96-9426-2baf1ea106b1\") " pod="calico-system/calico-node-wnrkv" Sep 4 04:26:52.026691 kubelet[2728]: I0904 04:26:52.026620 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e501088-f977-4e96-9426-2baf1ea106b1-lib-modules\") pod 
\"calico-node-wnrkv\" (UID: \"9e501088-f977-4e96-9426-2baf1ea106b1\") " pod="calico-system/calico-node-wnrkv" Sep 4 04:26:52.126261 kubelet[2728]: E0904 04:26:52.126193 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhrnv" podUID="471e8a18-1211-4eda-b37a-0a76de6d8f44" Sep 4 04:26:52.134240 kubelet[2728]: E0904 04:26:52.134191 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:52.134240 kubelet[2728]: W0904 04:26:52.134216 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:52.134240 kubelet[2728]: E0904 04:26:52.134252 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:52.135751 kubelet[2728]: E0904 04:26:52.135505 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:52.135751 kubelet[2728]: W0904 04:26:52.135522 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:52.135751 kubelet[2728]: E0904 04:26:52.135533 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:52.138966 kubelet[2728]: E0904 04:26:52.138062 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:52.138966 kubelet[2728]: W0904 04:26:52.138088 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:52.138966 kubelet[2728]: E0904 04:26:52.138110 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:52.148307 kubelet[2728]: E0904 04:26:52.147922 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:52.148307 kubelet[2728]: W0904 04:26:52.147949 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:52.148307 kubelet[2728]: E0904 04:26:52.147971 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:52.155464 kubelet[2728]: E0904 04:26:52.155438 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:52.155464 kubelet[2728]: W0904 04:26:52.155459 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:52.155571 kubelet[2728]: E0904 04:26:52.155483 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:52.155775 kubelet[2728]: E0904 04:26:52.155749 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:52.155775 kubelet[2728]: W0904 04:26:52.155764 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:52.155775 kubelet[2728]: E0904 04:26:52.155773 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:52.222409 containerd[1612]: time="2025-09-04T04:26:52.221222191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wnrkv,Uid:9e501088-f977-4e96-9426-2baf1ea106b1,Namespace:calico-system,Attempt:0,}" 
Sep 4 04:26:52.233452 kubelet[2728]: I0904 04:26:52.233390 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/471e8a18-1211-4eda-b37a-0a76de6d8f44-registration-dir\") pod \"csi-node-driver-bhrnv\" (UID: \"471e8a18-1211-4eda-b37a-0a76de6d8f44\") " pod="calico-system/csi-node-driver-bhrnv" 
Sep 4 04:26:52.235429 kubelet[2728]: I0904 04:26:52.235196 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/471e8a18-1211-4eda-b37a-0a76de6d8f44-socket-dir\") pod \"csi-node-driver-bhrnv\" (UID: \"471e8a18-1211-4eda-b37a-0a76de6d8f44\") " pod="calico-system/csi-node-driver-bhrnv" 
Sep 4 04:26:52.238766 kubelet[2728]: I0904 04:26:52.238719 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56p2s\" (UniqueName: \"kubernetes.io/projected/471e8a18-1211-4eda-b37a-0a76de6d8f44-kube-api-access-56p2s\") pod \"csi-node-driver-bhrnv\" (UID: \"471e8a18-1211-4eda-b37a-0a76de6d8f44\") " pod="calico-system/csi-node-driver-bhrnv" 
Sep 4 04:26:52.241027 kubelet[2728]: I0904 04:26:52.240880 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/471e8a18-1211-4eda-b37a-0a76de6d8f44-kubelet-dir\") pod \"csi-node-driver-bhrnv\" (UID: \"471e8a18-1211-4eda-b37a-0a76de6d8f44\") " pod="calico-system/csi-node-driver-bhrnv" 
Sep 4 04:26:52.243930 kubelet[2728]: I0904 04:26:52.243724 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/471e8a18-1211-4eda-b37a-0a76de6d8f44-varrun\") pod \"csi-node-driver-bhrnv\" (UID: \"471e8a18-1211-4eda-b37a-0a76de6d8f44\") " pod="calico-system/csi-node-driver-bhrnv" 
Sep 4 04:26:52.267550 containerd[1612]: time="2025-09-04T04:26:52.267477996Z" level=info msg="connecting to shim ec8f4fd9e37eb00e7ca93b1ad32fb18a8b3f8cb732f853402ff19638e114ccbf" address="unix:///run/containerd/s/6ec5065d0c58e402a6e07b54cb1d48f7494fece9b96e1324524f32b7fe86533e" namespace=k8s.io protocol=ttrpc version=3 
Sep 4 04:26:52.314571 systemd[1]: Started cri-containerd-ec8f4fd9e37eb00e7ca93b1ad32fb18a8b3f8cb732f853402ff19638e114ccbf.scope - libcontainer container ec8f4fd9e37eb00e7ca93b1ad32fb18a8b3f8cb732f853402ff19638e114ccbf. 
Sep 4 04:26:52.349255 containerd[1612]: time="2025-09-04T04:26:52.349150850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wnrkv,Uid:9e501088-f977-4e96-9426-2baf1ea106b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec8f4fd9e37eb00e7ca93b1ad32fb18a8b3f8cb732f853402ff19638e114ccbf\"" 
Sep 4 04:26:52.350540 kubelet[2728]: E0904 04:26:52.350483 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:52.350950 kubelet[2728]: E0904 04:26:52.350897 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:52.350950 kubelet[2728]: W0904 04:26:52.350913 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:52.351032 kubelet[2728]: E0904 04:26:52.351008 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:52.351139 kubelet[2728]: E0904 04:26:52.351121 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:52.351139 kubelet[2728]: W0904 04:26:52.351134 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:52.351215 kubelet[2728]: E0904 04:26:52.351173 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:52.351373 kubelet[2728]: E0904 04:26:52.351353 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:52.351373 kubelet[2728]: W0904 04:26:52.351370 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:52.351446 kubelet[2728]: E0904 04:26:52.351386 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:52.351650 kubelet[2728]: E0904 04:26:52.351626 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:52.351650 kubelet[2728]: W0904 04:26:52.351643 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:52.351731 kubelet[2728]: E0904 04:26:52.351665 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:52.351870 kubelet[2728]: E0904 04:26:52.351851 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:52.351870 kubelet[2728]: W0904 04:26:52.351863 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:52.351952 kubelet[2728]: E0904 04:26:52.351886 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:52.352161 kubelet[2728]: E0904 04:26:52.352142 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:52.352161 kubelet[2728]: W0904 04:26:52.352154 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:52.352263 kubelet[2728]: E0904 04:26:52.352178 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:52.352526 kubelet[2728]: E0904 04:26:52.352494 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:52.352526 kubelet[2728]: W0904 04:26:52.352517 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:52.352626 kubelet[2728]: E0904 04:26:52.352547 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:52.352870 kubelet[2728]: E0904 04:26:52.352846 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:52.352870 kubelet[2728]: W0904 04:26:52.352863 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:52.352945 kubelet[2728]: E0904 04:26:52.352876 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:52.363092 kubelet[2728]: E0904 04:26:52.363043 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:52.363092 kubelet[2728]: W0904 04:26:52.363072 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:52.363092 kubelet[2728]: E0904 04:26:52.363102 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:53.906002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3755651262.mount: Deactivated successfully. Sep 4 04:26:53.972688 kubelet[2728]: E0904 04:26:53.972569 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhrnv" podUID="471e8a18-1211-4eda-b37a-0a76de6d8f44" Sep 4 04:26:55.436589 containerd[1612]: time="2025-09-04T04:26:55.436503953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:55.437601 containerd[1612]: time="2025-09-04T04:26:55.437567214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 4 04:26:55.439089 containerd[1612]: time="2025-09-04T04:26:55.438995459Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:55.443515 containerd[1612]: time="2025-09-04T04:26:55.443447606Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:55.444421 containerd[1612]: time="2025-09-04T04:26:55.444362129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.418383894s" Sep 4 04:26:55.444421 containerd[1612]: time="2025-09-04T04:26:55.444413425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 4 04:26:55.448068 containerd[1612]: time="2025-09-04T04:26:55.448003939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 4 04:26:55.463398 containerd[1612]: time="2025-09-04T04:26:55.463302529Z" level=info msg="CreateContainer within sandbox \"c0f261fa5b596e2bacb37ac0eac13bb52d07071eaf04334d71eb9a4141621040\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 04:26:55.480243 containerd[1612]: time="2025-09-04T04:26:55.477357019Z" level=info msg="Container 3d771f3d9c692930f31a144a62a2b2c654371e4ffc19eeb4c1ce9e62a10ff0b5: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:26:55.495111 containerd[1612]: time="2025-09-04T04:26:55.495031819Z" level=info msg="CreateContainer within sandbox \"c0f261fa5b596e2bacb37ac0eac13bb52d07071eaf04334d71eb9a4141621040\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3d771f3d9c692930f31a144a62a2b2c654371e4ffc19eeb4c1ce9e62a10ff0b5\"" Sep 4 04:26:55.495981 containerd[1612]: time="2025-09-04T04:26:55.495920974Z" level=info msg="StartContainer for 
\"3d771f3d9c692930f31a144a62a2b2c654371e4ffc19eeb4c1ce9e62a10ff0b5\"" Sep 4 04:26:55.499617 containerd[1612]: time="2025-09-04T04:26:55.499561562Z" level=info msg="connecting to shim 3d771f3d9c692930f31a144a62a2b2c654371e4ffc19eeb4c1ce9e62a10ff0b5" address="unix:///run/containerd/s/4c891fedc195a69e3fa032090d5e8992662c3bab8b9b5964490322ddcb957a2e" protocol=ttrpc version=3 Sep 4 04:26:55.541608 systemd[1]: Started cri-containerd-3d771f3d9c692930f31a144a62a2b2c654371e4ffc19eeb4c1ce9e62a10ff0b5.scope - libcontainer container 3d771f3d9c692930f31a144a62a2b2c654371e4ffc19eeb4c1ce9e62a10ff0b5. Sep 4 04:26:55.613603 containerd[1612]: time="2025-09-04T04:26:55.613529141Z" level=info msg="StartContainer for \"3d771f3d9c692930f31a144a62a2b2c654371e4ffc19eeb4c1ce9e62a10ff0b5\" returns successfully" Sep 4 04:26:55.972955 kubelet[2728]: E0904 04:26:55.972882 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhrnv" podUID="471e8a18-1211-4eda-b37a-0a76de6d8f44" Sep 4 04:26:56.088730 kubelet[2728]: E0904 04:26:56.088692 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:56.186974 kubelet[2728]: E0904 04:26:56.186913 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.186974 kubelet[2728]: W0904 04:26:56.186943 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.186974 kubelet[2728]: E0904 04:26:56.186972 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:56.187246 kubelet[2728]: E0904 04:26:56.187225 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.187246 kubelet[2728]: W0904 04:26:56.187237 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.187361 kubelet[2728]: E0904 04:26:56.187248 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:56.187511 kubelet[2728]: E0904 04:26:56.187477 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.187511 kubelet[2728]: W0904 04:26:56.187492 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.187511 kubelet[2728]: E0904 04:26:56.187502 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:56.187712 kubelet[2728]: E0904 04:26:56.187688 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.187712 kubelet[2728]: W0904 04:26:56.187701 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.187712 kubelet[2728]: E0904 04:26:56.187712 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:56.187919 kubelet[2728]: E0904 04:26:56.187898 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.187919 kubelet[2728]: W0904 04:26:56.187910 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.187984 kubelet[2728]: E0904 04:26:56.187923 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:56.188117 kubelet[2728]: E0904 04:26:56.188093 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.188117 kubelet[2728]: W0904 04:26:56.188105 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.188117 kubelet[2728]: E0904 04:26:56.188115 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:56.188335 kubelet[2728]: E0904 04:26:56.188313 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.188335 kubelet[2728]: W0904 04:26:56.188326 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.188434 kubelet[2728]: E0904 04:26:56.188336 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:56.188555 kubelet[2728]: E0904 04:26:56.188531 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.188555 kubelet[2728]: W0904 04:26:56.188543 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.188555 kubelet[2728]: E0904 04:26:56.188553 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:56.188757 kubelet[2728]: E0904 04:26:56.188736 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.188757 kubelet[2728]: W0904 04:26:56.188748 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.188927 kubelet[2728]: E0904 04:26:56.188760 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:56.188972 kubelet[2728]: E0904 04:26:56.188938 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.188972 kubelet[2728]: W0904 04:26:56.188948 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.188972 kubelet[2728]: E0904 04:26:56.188957 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:56.189167 kubelet[2728]: E0904 04:26:56.189146 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.189167 kubelet[2728]: W0904 04:26:56.189158 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.189248 kubelet[2728]: E0904 04:26:56.189169 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:56.189383 kubelet[2728]: E0904 04:26:56.189363 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.189383 kubelet[2728]: W0904 04:26:56.189376 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.189477 kubelet[2728]: E0904 04:26:56.189386 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:56.189605 kubelet[2728]: E0904 04:26:56.189584 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.189605 kubelet[2728]: W0904 04:26:56.189597 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.189682 kubelet[2728]: E0904 04:26:56.189608 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:56.189822 kubelet[2728]: E0904 04:26:56.189798 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.189822 kubelet[2728]: W0904 04:26:56.189810 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.189822 kubelet[2728]: E0904 04:26:56.189820 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:56.190036 kubelet[2728]: E0904 04:26:56.190016 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.190036 kubelet[2728]: W0904 04:26:56.190028 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.190114 kubelet[2728]: E0904 04:26:56.190038 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:56.282645 kubelet[2728]: E0904 04:26:56.281826 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.282645 kubelet[2728]: W0904 04:26:56.281857 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.282645 kubelet[2728]: E0904 04:26:56.281883 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:56.282645 kubelet[2728]: E0904 04:26:56.282104 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.282645 kubelet[2728]: W0904 04:26:56.282112 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.282645 kubelet[2728]: E0904 04:26:56.282126 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:56.282645 kubelet[2728]: E0904 04:26:56.282399 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.282645 kubelet[2728]: W0904 04:26:56.282441 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.282645 kubelet[2728]: E0904 04:26:56.282484 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:56.283086 kubelet[2728]: E0904 04:26:56.282822 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.283086 kubelet[2728]: W0904 04:26:56.282834 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.283086 kubelet[2728]: E0904 04:26:56.282859 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:56.283273 kubelet[2728]: E0904 04:26:56.283250 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.283273 kubelet[2728]: W0904 04:26:56.283264 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.283374 kubelet[2728]: E0904 04:26:56.283296 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:56.283540 kubelet[2728]: E0904 04:26:56.283514 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.283540 kubelet[2728]: W0904 04:26:56.283528 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.283598 kubelet[2728]: E0904 04:26:56.283564 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:56.283730 kubelet[2728]: E0904 04:26:56.283710 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.283730 kubelet[2728]: W0904 04:26:56.283725 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.283780 kubelet[2728]: E0904 04:26:56.283769 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:56.283941 kubelet[2728]: E0904 04:26:56.283923 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.283941 kubelet[2728]: W0904 04:26:56.283935 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.284014 kubelet[2728]: E0904 04:26:56.283994 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:56.284177 kubelet[2728]: E0904 04:26:56.284161 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.284177 kubelet[2728]: W0904 04:26:56.284173 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.284241 kubelet[2728]: E0904 04:26:56.284188 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:56.284546 kubelet[2728]: E0904 04:26:56.284528 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.284546 kubelet[2728]: W0904 04:26:56.284543 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.284625 kubelet[2728]: E0904 04:26:56.284559 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:56.284749 kubelet[2728]: E0904 04:26:56.284735 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.284773 kubelet[2728]: W0904 04:26:56.284747 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.284773 kubelet[2728]: E0904 04:26:56.284764 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 04:26:56.285004 kubelet[2728]: E0904 04:26:56.284986 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:56.285004 kubelet[2728]: W0904 04:26:56.285000 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:56.285072 kubelet[2728]: E0904 04:26:56.285015 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:56.394121 kubelet[2728]: I0904 04:26:56.393929 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d55776bbc-9gpvz" podStartSLOduration=1.971558281 podStartE2EDuration="5.39390165s" podCreationTimestamp="2025-09-04 04:26:51 +0000 UTC" firstStartedPulling="2025-09-04 04:26:52.025376458 +0000 UTC m=+20.290286102" lastFinishedPulling="2025-09-04 04:26:55.447719827 +0000 UTC m=+23.712629471" observedRunningTime="2025-09-04 04:26:56.393416281 +0000 UTC m=+24.658325925" watchObservedRunningTime="2025-09-04 04:26:56.39390165 +0000 UTC m=+24.658811294" Sep 4 04:26:57.091154 kubelet[2728]: E0904 04:26:57.091080 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:57.097135 kubelet[2728]: E0904 04:26:57.097083 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 04:26:57.097135 kubelet[2728]: W0904 04:26:57.097121 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 04:26:57.097400 kubelet[2728]: E0904 04:26:57.097157 2728 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 04:26:57.365102 containerd[1612]: time="2025-09-04T04:26:57.364917840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:57.366871 containerd[1612]: time="2025-09-04T04:26:57.366649884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 4 04:26:57.368437 containerd[1612]: time="2025-09-04T04:26:57.368392808Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:57.370835 containerd[1612]: time="2025-09-04T04:26:57.370773757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:57.371417 containerd[1612]: time="2025-09-04T04:26:57.371348243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.923284783s" Sep 4 04:26:57.371417 containerd[1612]: time="2025-09-04T04:26:57.371407223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 4 04:26:57.373892 containerd[1612]: time="2025-09-04T04:26:57.373841512Z" level=info msg="CreateContainer within sandbox \"ec8f4fd9e37eb00e7ca93b1ad32fb18a8b3f8cb732f853402ff19638e114ccbf\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 04:26:57.384072 containerd[1612]: time="2025-09-04T04:26:57.383997412Z" level=info msg="Container fbf3f3f52294768fb5630af52a43d7361a6e0e0fbb76fe78b8cc629535609e01: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:26:57.396842 containerd[1612]: time="2025-09-04T04:26:57.396776376Z" level=info msg="CreateContainer within sandbox \"ec8f4fd9e37eb00e7ca93b1ad32fb18a8b3f8cb732f853402ff19638e114ccbf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fbf3f3f52294768fb5630af52a43d7361a6e0e0fbb76fe78b8cc629535609e01\"" Sep 4 04:26:57.397615 containerd[1612]: time="2025-09-04T04:26:57.397586433Z" level=info msg="StartContainer for \"fbf3f3f52294768fb5630af52a43d7361a6e0e0fbb76fe78b8cc629535609e01\"" Sep 4 04:26:57.399305 containerd[1612]: time="2025-09-04T04:26:57.399244037Z" level=info msg="connecting to shim fbf3f3f52294768fb5630af52a43d7361a6e0e0fbb76fe78b8cc629535609e01" address="unix:///run/containerd/s/6ec5065d0c58e402a6e07b54cb1d48f7494fece9b96e1324524f32b7fe86533e" protocol=ttrpc version=3 Sep 4 04:26:57.427497 systemd[1]: Started cri-containerd-fbf3f3f52294768fb5630af52a43d7361a6e0e0fbb76fe78b8cc629535609e01.scope - libcontainer container fbf3f3f52294768fb5630af52a43d7361a6e0e0fbb76fe78b8cc629535609e01. Sep 4 04:26:57.502806 systemd[1]: cri-containerd-fbf3f3f52294768fb5630af52a43d7361a6e0e0fbb76fe78b8cc629535609e01.scope: Deactivated successfully. 
Sep 4 04:26:57.506357 containerd[1612]: time="2025-09-04T04:26:57.506235786Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbf3f3f52294768fb5630af52a43d7361a6e0e0fbb76fe78b8cc629535609e01\" id:\"fbf3f3f52294768fb5630af52a43d7361a6e0e0fbb76fe78b8cc629535609e01\" pid:3476 exited_at:{seconds:1756960017 nanos:504845943}" Sep 4 04:26:57.513010 containerd[1612]: time="2025-09-04T04:26:57.512941143Z" level=info msg="received exit event container_id:\"fbf3f3f52294768fb5630af52a43d7361a6e0e0fbb76fe78b8cc629535609e01\" id:\"fbf3f3f52294768fb5630af52a43d7361a6e0e0fbb76fe78b8cc629535609e01\" pid:3476 exited_at:{seconds:1756960017 nanos:504845943}" Sep 4 04:26:57.514705 containerd[1612]: time="2025-09-04T04:26:57.514657618Z" level=info msg="StartContainer for \"fbf3f3f52294768fb5630af52a43d7361a6e0e0fbb76fe78b8cc629535609e01\" returns successfully" Sep 4 04:26:57.542344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbf3f3f52294768fb5630af52a43d7361a6e0e0fbb76fe78b8cc629535609e01-rootfs.mount: Deactivated successfully. 
Sep 4 04:26:57.972469 kubelet[2728]: E0904 04:26:57.972391 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhrnv" podUID="471e8a18-1211-4eda-b37a-0a76de6d8f44" Sep 4 04:26:58.095235 kubelet[2728]: E0904 04:26:58.095191 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:58.096402 containerd[1612]: time="2025-09-04T04:26:58.096335403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 4 04:26:59.972416 kubelet[2728]: E0904 04:26:59.972347 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhrnv" podUID="471e8a18-1211-4eda-b37a-0a76de6d8f44" Sep 4 04:27:01.972396 kubelet[2728]: E0904 04:27:01.972251 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhrnv" podUID="471e8a18-1211-4eda-b37a-0a76de6d8f44" Sep 4 04:27:03.037913 containerd[1612]: time="2025-09-04T04:27:03.037835305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:03.038675 containerd[1612]: time="2025-09-04T04:27:03.038630013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 4 04:27:03.039777 containerd[1612]: 
time="2025-09-04T04:27:03.039742838Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:03.042022 containerd[1612]: time="2025-09-04T04:27:03.041975029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:03.042731 containerd[1612]: time="2025-09-04T04:27:03.042679740Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.946287159s" Sep 4 04:27:03.042731 containerd[1612]: time="2025-09-04T04:27:03.042725325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 4 04:27:03.051535 containerd[1612]: time="2025-09-04T04:27:03.051477349Z" level=info msg="CreateContainer within sandbox \"ec8f4fd9e37eb00e7ca93b1ad32fb18a8b3f8cb732f853402ff19638e114ccbf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 04:27:03.062193 containerd[1612]: time="2025-09-04T04:27:03.062123491Z" level=info msg="Container 9d96062a74b96c8acdedfedc25cb2180d334b80e591464150deb6f644f92fc96: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:27:03.073233 containerd[1612]: time="2025-09-04T04:27:03.073154504Z" level=info msg="CreateContainer within sandbox \"ec8f4fd9e37eb00e7ca93b1ad32fb18a8b3f8cb732f853402ff19638e114ccbf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9d96062a74b96c8acdedfedc25cb2180d334b80e591464150deb6f644f92fc96\"" Sep 4 
04:27:03.073847 containerd[1612]: time="2025-09-04T04:27:03.073814470Z" level=info msg="StartContainer for \"9d96062a74b96c8acdedfedc25cb2180d334b80e591464150deb6f644f92fc96\"" Sep 4 04:27:03.075504 containerd[1612]: time="2025-09-04T04:27:03.075475672Z" level=info msg="connecting to shim 9d96062a74b96c8acdedfedc25cb2180d334b80e591464150deb6f644f92fc96" address="unix:///run/containerd/s/6ec5065d0c58e402a6e07b54cb1d48f7494fece9b96e1324524f32b7fe86533e" protocol=ttrpc version=3 Sep 4 04:27:03.105555 systemd[1]: Started cri-containerd-9d96062a74b96c8acdedfedc25cb2180d334b80e591464150deb6f644f92fc96.scope - libcontainer container 9d96062a74b96c8acdedfedc25cb2180d334b80e591464150deb6f644f92fc96. Sep 4 04:27:03.159257 containerd[1612]: time="2025-09-04T04:27:03.159202526Z" level=info msg="StartContainer for \"9d96062a74b96c8acdedfedc25cb2180d334b80e591464150deb6f644f92fc96\" returns successfully" Sep 4 04:27:03.972105 kubelet[2728]: E0904 04:27:03.972042 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhrnv" podUID="471e8a18-1211-4eda-b37a-0a76de6d8f44" Sep 4 04:27:05.015262 containerd[1612]: time="2025-09-04T04:27:05.015197697Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 04:27:05.018891 systemd[1]: cri-containerd-9d96062a74b96c8acdedfedc25cb2180d334b80e591464150deb6f644f92fc96.scope: Deactivated successfully. Sep 4 04:27:05.019371 systemd[1]: cri-containerd-9d96062a74b96c8acdedfedc25cb2180d334b80e591464150deb6f644f92fc96.scope: Consumed 701ms CPU time, 181.9M memory peak, 3.9M read from disk, 171.3M written to disk. 
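The containerd entries above report both the bytes read for the `ghcr.io/flatcar/calico/cni:v3.30.3` pull (`bytes read=70440613`) and the elapsed wall time (`in 4.946287159s`). As an aside, those two figures give an approximate pull throughput; this is an illustrative back-of-the-envelope sketch using the values copied from the log, not anything containerd itself reports:

```python
# Approximate pull throughput from the containerd log figures above.
# Both constants are copied verbatim from the "stop pulling image" and
# "Pulled image ... in 4.946287159s" entries.
bytes_read = 70_440_613          # bytes read=70440613
elapsed_s = 4.946287159          # "in 4.946287159s"

throughput_mib_s = bytes_read / elapsed_s / (1024 * 1024)
print(f"{throughput_mib_s:.1f} MiB/s")   # roughly 13.6 MiB/s
```

Note the logged image size (`70440613` bytes read) differs from the unpacked size (`71933316`) in the "Pulled image" entry; the calculation above uses the transferred bytes.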
Sep 4 04:27:05.020006 containerd[1612]: time="2025-09-04T04:27:05.019973243Z" level=info msg="received exit event container_id:\"9d96062a74b96c8acdedfedc25cb2180d334b80e591464150deb6f644f92fc96\" id:\"9d96062a74b96c8acdedfedc25cb2180d334b80e591464150deb6f644f92fc96\" pid:3537 exited_at:{seconds:1756960025 nanos:19719398}" Sep 4 04:27:05.020369 containerd[1612]: time="2025-09-04T04:27:05.020325974Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d96062a74b96c8acdedfedc25cb2180d334b80e591464150deb6f644f92fc96\" id:\"9d96062a74b96c8acdedfedc25cb2180d334b80e591464150deb6f644f92fc96\" pid:3537 exited_at:{seconds:1756960025 nanos:19719398}" Sep 4 04:27:05.046757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d96062a74b96c8acdedfedc25cb2180d334b80e591464150deb6f644f92fc96-rootfs.mount: Deactivated successfully. Sep 4 04:27:05.106249 kubelet[2728]: I0904 04:27:05.106202 2728 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 4 04:27:05.239820 systemd[1]: Created slice kubepods-burstable-podfc310726_94cb_4b24_8d5b_a83ff5c37bf9.slice - libcontainer container kubepods-burstable-podfc310726_94cb_4b24_8d5b_a83ff5c37bf9.slice. Sep 4 04:27:05.278243 systemd[1]: Created slice kubepods-besteffort-pod43cb4990_cf2d_4baf_b7d5_c875eeaa23e6.slice - libcontainer container kubepods-besteffort-pod43cb4990_cf2d_4baf_b7d5_c875eeaa23e6.slice. Sep 4 04:27:05.286246 systemd[1]: Created slice kubepods-besteffort-pod41c81eb9_036f_44f8_aa46_f56bea288457.slice - libcontainer container kubepods-besteffort-pod41c81eb9_036f_44f8_aa46_f56bea288457.slice. Sep 4 04:27:05.294104 systemd[1]: Created slice kubepods-besteffort-pod70ee29dd_1950_4e60_940b_ea22b976f88f.slice - libcontainer container kubepods-besteffort-pod70ee29dd_1950_4e60_940b_ea22b976f88f.slice. 
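The `Created slice kubepods-…` entries above follow kubelet's systemd cgroup naming convention: the QoS class is embedded in the slice name and the pod UID appears with its dashes replaced by underscores. A minimal sketch for recovering the pod UID from one of these slice names (the helper name is my own, not a kubelet API):

```python
import re

def pod_uid_from_slice(name: str) -> str:
    """Recover the pod UID from a kubelet-created systemd slice name.

    kubelet encodes the UID with '-' replaced by '_', e.g.
    kubepods-besteffort-pod471e8a18_1211_4eda_b37a_0a76de6d8f44.slice
    Guaranteed-QoS pods omit the QoS segment (kubepods-pod<uid>.slice).
    """
    m = re.fullmatch(
        r"kubepods-(?:besteffort-|burstable-)?pod([0-9a-f_]+)\.slice", name
    )
    if not m:
        raise ValueError(f"not a kubepods slice name: {name}")
    return m.group(1).replace("_", "-")

# Slice names taken from the log entries above.
print(pod_uid_from_slice(
    "kubepods-burstable-podfc310726_94cb_4b24_8d5b_a83ff5c37bf9.slice"))
```

Cross-checking against the log: the recovered UID matches the `podUID` kubelet later reports for `coredns-7c65d6cfc9-ncsgv`.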
Sep 4 04:27:05.304703 systemd[1]: Created slice kubepods-burstable-podceca6a0d_4411_4bf5_9886_bc8bec807f34.slice - libcontainer container kubepods-burstable-podceca6a0d_4411_4bf5_9886_bc8bec807f34.slice. Sep 4 04:27:05.313463 systemd[1]: Created slice kubepods-besteffort-poddb01949a_0c6e_4bba_af1d_bcb364ba4424.slice - libcontainer container kubepods-besteffort-poddb01949a_0c6e_4bba_af1d_bcb364ba4424.slice. Sep 4 04:27:05.318032 systemd[1]: Created slice kubepods-besteffort-pod9a23b40c_a5ee_4ca1_98cf_6891fa20aa3f.slice - libcontainer container kubepods-besteffort-pod9a23b40c_a5ee_4ca1_98cf_6891fa20aa3f.slice. Sep 4 04:27:05.347310 kubelet[2728]: I0904 04:27:05.347237 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc310726-94cb-4b24-8d5b-a83ff5c37bf9-config-volume\") pod \"coredns-7c65d6cfc9-ncsgv\" (UID: \"fc310726-94cb-4b24-8d5b-a83ff5c37bf9\") " pod="kube-system/coredns-7c65d6cfc9-ncsgv" Sep 4 04:27:05.347310 kubelet[2728]: I0904 04:27:05.347311 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2nbr\" (UniqueName: \"kubernetes.io/projected/fc310726-94cb-4b24-8d5b-a83ff5c37bf9-kube-api-access-l2nbr\") pod \"coredns-7c65d6cfc9-ncsgv\" (UID: \"fc310726-94cb-4b24-8d5b-a83ff5c37bf9\") " pod="kube-system/coredns-7c65d6cfc9-ncsgv" Sep 4 04:27:05.447826 kubelet[2728]: I0904 04:27:05.447764 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a23b40c-a5ee-4ca1-98cf-6891fa20aa3f-calico-apiserver-certs\") pod \"calico-apiserver-7676c47945-x89mv\" (UID: \"9a23b40c-a5ee-4ca1-98cf-6891fa20aa3f\") " pod="calico-apiserver/calico-apiserver-7676c47945-x89mv" Sep 4 04:27:05.447826 kubelet[2728]: I0904 04:27:05.447821 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/db01949a-0c6e-4bba-af1d-bcb364ba4424-whisker-backend-key-pair\") pod \"whisker-6d976b6dcd-xfldh\" (UID: \"db01949a-0c6e-4bba-af1d-bcb364ba4424\") " pod="calico-system/whisker-6d976b6dcd-xfldh" Sep 4 04:27:05.448039 kubelet[2728]: I0904 04:27:05.447845 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43cb4990-cf2d-4baf-b7d5-c875eeaa23e6-goldmane-ca-bundle\") pod \"goldmane-7988f88666-kfp9d\" (UID: \"43cb4990-cf2d-4baf-b7d5-c875eeaa23e6\") " pod="calico-system/goldmane-7988f88666-kfp9d" Sep 4 04:27:05.448039 kubelet[2728]: I0904 04:27:05.447884 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qktxh\" (UniqueName: \"kubernetes.io/projected/41c81eb9-036f-44f8-aa46-f56bea288457-kube-api-access-qktxh\") pod \"calico-apiserver-7676c47945-gdttk\" (UID: \"41c81eb9-036f-44f8-aa46-f56bea288457\") " pod="calico-apiserver/calico-apiserver-7676c47945-gdttk" Sep 4 04:27:05.448039 kubelet[2728]: I0904 04:27:05.447926 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43cb4990-cf2d-4baf-b7d5-c875eeaa23e6-config\") pod \"goldmane-7988f88666-kfp9d\" (UID: \"43cb4990-cf2d-4baf-b7d5-c875eeaa23e6\") " pod="calico-system/goldmane-7988f88666-kfp9d" Sep 4 04:27:05.448039 kubelet[2728]: I0904 04:27:05.447966 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/41c81eb9-036f-44f8-aa46-f56bea288457-calico-apiserver-certs\") pod \"calico-apiserver-7676c47945-gdttk\" (UID: \"41c81eb9-036f-44f8-aa46-f56bea288457\") " pod="calico-apiserver/calico-apiserver-7676c47945-gdttk" Sep 4 04:27:05.448039 kubelet[2728]: I0904 
04:27:05.448004 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70ee29dd-1950-4e60-940b-ea22b976f88f-tigera-ca-bundle\") pod \"calico-kube-controllers-854984d5c7-kpvmr\" (UID: \"70ee29dd-1950-4e60-940b-ea22b976f88f\") " pod="calico-system/calico-kube-controllers-854984d5c7-kpvmr" Sep 4 04:27:05.448173 kubelet[2728]: I0904 04:27:05.448041 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-664pp\" (UniqueName: \"kubernetes.io/projected/43cb4990-cf2d-4baf-b7d5-c875eeaa23e6-kube-api-access-664pp\") pod \"goldmane-7988f88666-kfp9d\" (UID: \"43cb4990-cf2d-4baf-b7d5-c875eeaa23e6\") " pod="calico-system/goldmane-7988f88666-kfp9d" Sep 4 04:27:05.448173 kubelet[2728]: I0904 04:27:05.448127 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db01949a-0c6e-4bba-af1d-bcb364ba4424-whisker-ca-bundle\") pod \"whisker-6d976b6dcd-xfldh\" (UID: \"db01949a-0c6e-4bba-af1d-bcb364ba4424\") " pod="calico-system/whisker-6d976b6dcd-xfldh" Sep 4 04:27:05.448223 kubelet[2728]: I0904 04:27:05.448169 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4bgk\" (UniqueName: \"kubernetes.io/projected/db01949a-0c6e-4bba-af1d-bcb364ba4424-kube-api-access-b4bgk\") pod \"whisker-6d976b6dcd-xfldh\" (UID: \"db01949a-0c6e-4bba-af1d-bcb364ba4424\") " pod="calico-system/whisker-6d976b6dcd-xfldh" Sep 4 04:27:05.448223 kubelet[2728]: I0904 04:27:05.448211 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ceca6a0d-4411-4bf5-9886-bc8bec807f34-config-volume\") pod \"coredns-7c65d6cfc9-hlfrf\" (UID: \"ceca6a0d-4411-4bf5-9886-bc8bec807f34\") " 
pod="kube-system/coredns-7c65d6cfc9-hlfrf" Sep 4 04:27:05.448302 kubelet[2728]: I0904 04:27:05.448236 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j54cd\" (UniqueName: \"kubernetes.io/projected/ceca6a0d-4411-4bf5-9886-bc8bec807f34-kube-api-access-j54cd\") pod \"coredns-7c65d6cfc9-hlfrf\" (UID: \"ceca6a0d-4411-4bf5-9886-bc8bec807f34\") " pod="kube-system/coredns-7c65d6cfc9-hlfrf" Sep 4 04:27:05.448302 kubelet[2728]: I0904 04:27:05.448257 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhgxl\" (UniqueName: \"kubernetes.io/projected/9a23b40c-a5ee-4ca1-98cf-6891fa20aa3f-kube-api-access-rhgxl\") pod \"calico-apiserver-7676c47945-x89mv\" (UID: \"9a23b40c-a5ee-4ca1-98cf-6891fa20aa3f\") " pod="calico-apiserver/calico-apiserver-7676c47945-x89mv" Sep 4 04:27:05.448356 kubelet[2728]: I0904 04:27:05.448300 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/43cb4990-cf2d-4baf-b7d5-c875eeaa23e6-goldmane-key-pair\") pod \"goldmane-7988f88666-kfp9d\" (UID: \"43cb4990-cf2d-4baf-b7d5-c875eeaa23e6\") " pod="calico-system/goldmane-7988f88666-kfp9d" Sep 4 04:27:05.448356 kubelet[2728]: I0904 04:27:05.448329 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrskg\" (UniqueName: \"kubernetes.io/projected/70ee29dd-1950-4e60-940b-ea22b976f88f-kube-api-access-vrskg\") pod \"calico-kube-controllers-854984d5c7-kpvmr\" (UID: \"70ee29dd-1950-4e60-940b-ea22b976f88f\") " pod="calico-system/calico-kube-controllers-854984d5c7-kpvmr" Sep 4 04:27:05.543576 kubelet[2728]: E0904 04:27:05.543423 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:27:05.544528 
containerd[1612]: time="2025-09-04T04:27:05.544433897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ncsgv,Uid:fc310726-94cb-4b24-8d5b-a83ff5c37bf9,Namespace:kube-system,Attempt:0,}" Sep 4 04:27:05.584541 containerd[1612]: time="2025-09-04T04:27:05.584474784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-kfp9d,Uid:43cb4990-cf2d-4baf-b7d5-c875eeaa23e6,Namespace:calico-system,Attempt:0,}" Sep 4 04:27:05.591839 containerd[1612]: time="2025-09-04T04:27:05.591758077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7676c47945-gdttk,Uid:41c81eb9-036f-44f8-aa46-f56bea288457,Namespace:calico-apiserver,Attempt:0,}" Sep 4 04:27:05.597860 containerd[1612]: time="2025-09-04T04:27:05.597828059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854984d5c7-kpvmr,Uid:70ee29dd-1950-4e60-940b-ea22b976f88f,Namespace:calico-system,Attempt:0,}" Sep 4 04:27:05.610542 kubelet[2728]: E0904 04:27:05.610479 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:27:05.611493 containerd[1612]: time="2025-09-04T04:27:05.611443504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hlfrf,Uid:ceca6a0d-4411-4bf5-9886-bc8bec807f34,Namespace:kube-system,Attempt:0,}" Sep 4 04:27:05.617081 containerd[1612]: time="2025-09-04T04:27:05.617043063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d976b6dcd-xfldh,Uid:db01949a-0c6e-4bba-af1d-bcb364ba4424,Namespace:calico-system,Attempt:0,}" Sep 4 04:27:05.620594 containerd[1612]: time="2025-09-04T04:27:05.620550824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7676c47945-x89mv,Uid:9a23b40c-a5ee-4ca1-98cf-6891fa20aa3f,Namespace:calico-apiserver,Attempt:0,}" Sep 4 04:27:05.640879 containerd[1612]: 
time="2025-09-04T04:27:05.640839851Z" level=error msg="Failed to destroy network for sandbox \"a2cf1eb34d13cc0fdeb0055c5dd11c3b42f217af15baeaa73a6be7af16b7121c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.695476 containerd[1612]: time="2025-09-04T04:27:05.695395698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ncsgv,Uid:fc310726-94cb-4b24-8d5b-a83ff5c37bf9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2cf1eb34d13cc0fdeb0055c5dd11c3b42f217af15baeaa73a6be7af16b7121c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.697183 kubelet[2728]: E0904 04:27:05.697107 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2cf1eb34d13cc0fdeb0055c5dd11c3b42f217af15baeaa73a6be7af16b7121c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.697319 kubelet[2728]: E0904 04:27:05.697221 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2cf1eb34d13cc0fdeb0055c5dd11c3b42f217af15baeaa73a6be7af16b7121c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ncsgv" Sep 4 04:27:05.697319 kubelet[2728]: E0904 04:27:05.697250 2728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"a2cf1eb34d13cc0fdeb0055c5dd11c3b42f217af15baeaa73a6be7af16b7121c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ncsgv" Sep 4 04:27:05.697639 kubelet[2728]: E0904 04:27:05.697359 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-ncsgv_kube-system(fc310726-94cb-4b24-8d5b-a83ff5c37bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-ncsgv_kube-system(fc310726-94cb-4b24-8d5b-a83ff5c37bf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2cf1eb34d13cc0fdeb0055c5dd11c3b42f217af15baeaa73a6be7af16b7121c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ncsgv" podUID="fc310726-94cb-4b24-8d5b-a83ff5c37bf9" Sep 4 04:27:05.788798 containerd[1612]: time="2025-09-04T04:27:05.788618379Z" level=error msg="Failed to destroy network for sandbox \"0c528027bba352c4cf918bbe9002cdf8c6f57143bb68396fd318947b02000e51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.804442 containerd[1612]: time="2025-09-04T04:27:05.804213152Z" level=error msg="Failed to destroy network for sandbox \"f057038d50be96a4db63a2243bc7334c808bae97b395997963c0deba74174690\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.805313 containerd[1612]: 
time="2025-09-04T04:27:05.805236860Z" level=error msg="Failed to destroy network for sandbox \"9bbb26b0343de0a4e79ca94366056e5df460ae687b9c75558fc928ec4e6abbd4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.808568 containerd[1612]: time="2025-09-04T04:27:05.808464527Z" level=error msg="Failed to destroy network for sandbox \"b09851aecb342a931034482fe782caeca9b25da6df8a6b4a72f1183b31222219\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.810396 containerd[1612]: time="2025-09-04T04:27:05.810363724Z" level=error msg="Failed to destroy network for sandbox \"09e1fb2851c5889b4bf5881f77192fbdf74057f1a5026032c0f325fe1da9486a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.821099 containerd[1612]: time="2025-09-04T04:27:05.821041136Z" level=error msg="Failed to destroy network for sandbox \"917f124a72c100c7e284d453019dc46bbb92d72c71ae522ec4b8faaac12c2bc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.829964 containerd[1612]: time="2025-09-04T04:27:05.829888138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-kfp9d,Uid:43cb4990-cf2d-4baf-b7d5-c875eeaa23e6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c528027bba352c4cf918bbe9002cdf8c6f57143bb68396fd318947b02000e51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.830338 kubelet[2728]: E0904 04:27:05.830258 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c528027bba352c4cf918bbe9002cdf8c6f57143bb68396fd318947b02000e51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.830420 kubelet[2728]: E0904 04:27:05.830376 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c528027bba352c4cf918bbe9002cdf8c6f57143bb68396fd318947b02000e51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-kfp9d" Sep 4 04:27:05.830420 kubelet[2728]: E0904 04:27:05.830404 2728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c528027bba352c4cf918bbe9002cdf8c6f57143bb68396fd318947b02000e51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-kfp9d" Sep 4 04:27:05.830495 kubelet[2728]: E0904 04:27:05.830470 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-kfp9d_calico-system(43cb4990-cf2d-4baf-b7d5-c875eeaa23e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-kfp9d_calico-system(43cb4990-cf2d-4baf-b7d5-c875eeaa23e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"0c528027bba352c4cf918bbe9002cdf8c6f57143bb68396fd318947b02000e51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-kfp9d" podUID="43cb4990-cf2d-4baf-b7d5-c875eeaa23e6" Sep 4 04:27:05.832417 containerd[1612]: time="2025-09-04T04:27:05.832356683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854984d5c7-kpvmr,Uid:70ee29dd-1950-4e60-940b-ea22b976f88f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f057038d50be96a4db63a2243bc7334c808bae97b395997963c0deba74174690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.832792 kubelet[2728]: E0904 04:27:05.832742 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f057038d50be96a4db63a2243bc7334c808bae97b395997963c0deba74174690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.832854 kubelet[2728]: E0904 04:27:05.832791 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f057038d50be96a4db63a2243bc7334c808bae97b395997963c0deba74174690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-854984d5c7-kpvmr" Sep 4 04:27:05.832854 kubelet[2728]: E0904 04:27:05.832814 2728 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f057038d50be96a4db63a2243bc7334c808bae97b395997963c0deba74174690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-854984d5c7-kpvmr" Sep 4 04:27:05.832975 kubelet[2728]: E0904 04:27:05.832861 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-854984d5c7-kpvmr_calico-system(70ee29dd-1950-4e60-940b-ea22b976f88f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-854984d5c7-kpvmr_calico-system(70ee29dd-1950-4e60-940b-ea22b976f88f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f057038d50be96a4db63a2243bc7334c808bae97b395997963c0deba74174690\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-854984d5c7-kpvmr" podUID="70ee29dd-1950-4e60-940b-ea22b976f88f" Sep 4 04:27:05.833964 containerd[1612]: time="2025-09-04T04:27:05.833873113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hlfrf,Uid:ceca6a0d-4411-4bf5-9886-bc8bec807f34,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bbb26b0343de0a4e79ca94366056e5df460ae687b9c75558fc928ec4e6abbd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.834317 kubelet[2728]: E0904 04:27:05.834018 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"9bbb26b0343de0a4e79ca94366056e5df460ae687b9c75558fc928ec4e6abbd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.834317 kubelet[2728]: E0904 04:27:05.834054 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bbb26b0343de0a4e79ca94366056e5df460ae687b9c75558fc928ec4e6abbd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hlfrf" Sep 4 04:27:05.834317 kubelet[2728]: E0904 04:27:05.834074 2728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bbb26b0343de0a4e79ca94366056e5df460ae687b9c75558fc928ec4e6abbd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hlfrf" Sep 4 04:27:05.834522 kubelet[2728]: E0904 04:27:05.834115 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hlfrf_kube-system(ceca6a0d-4411-4bf5-9886-bc8bec807f34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hlfrf_kube-system(ceca6a0d-4411-4bf5-9886-bc8bec807f34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bbb26b0343de0a4e79ca94366056e5df460ae687b9c75558fc928ec4e6abbd4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7c65d6cfc9-hlfrf" podUID="ceca6a0d-4411-4bf5-9886-bc8bec807f34" Sep 4 04:27:05.907208 containerd[1612]: time="2025-09-04T04:27:05.907122188Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7676c47945-gdttk,Uid:41c81eb9-036f-44f8-aa46-f56bea288457,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b09851aecb342a931034482fe782caeca9b25da6df8a6b4a72f1183b31222219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.907512 kubelet[2728]: E0904 04:27:05.907462 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b09851aecb342a931034482fe782caeca9b25da6df8a6b4a72f1183b31222219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:05.907607 kubelet[2728]: E0904 04:27:05.907539 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b09851aecb342a931034482fe782caeca9b25da6df8a6b4a72f1183b31222219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7676c47945-gdttk" Sep 4 04:27:05.907607 kubelet[2728]: E0904 04:27:05.907561 2728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b09851aecb342a931034482fe782caeca9b25da6df8a6b4a72f1183b31222219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7676c47945-gdttk" Sep 4 04:27:05.907686 kubelet[2728]: E0904 04:27:05.907610 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7676c47945-gdttk_calico-apiserver(41c81eb9-036f-44f8-aa46-f56bea288457)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7676c47945-gdttk_calico-apiserver(41c81eb9-036f-44f8-aa46-f56bea288457)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b09851aecb342a931034482fe782caeca9b25da6df8a6b4a72f1183b31222219\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7676c47945-gdttk" podUID="41c81eb9-036f-44f8-aa46-f56bea288457" Sep 4 04:27:05.978189 systemd[1]: Created slice kubepods-besteffort-pod471e8a18_1211_4eda_b37a_0a76de6d8f44.slice - libcontainer container kubepods-besteffort-pod471e8a18_1211_4eda_b37a_0a76de6d8f44.slice. 
Sep 4 04:27:05.980486 containerd[1612]: time="2025-09-04T04:27:05.980446514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bhrnv,Uid:471e8a18-1211-4eda-b37a-0a76de6d8f44,Namespace:calico-system,Attempt:0,}" Sep 4 04:27:06.021360 containerd[1612]: time="2025-09-04T04:27:06.021266091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7676c47945-x89mv,Uid:9a23b40c-a5ee-4ca1-98cf-6891fa20aa3f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e1fb2851c5889b4bf5881f77192fbdf74057f1a5026032c0f325fe1da9486a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:06.021906 kubelet[2728]: E0904 04:27:06.021597 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e1fb2851c5889b4bf5881f77192fbdf74057f1a5026032c0f325fe1da9486a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:06.021906 kubelet[2728]: E0904 04:27:06.021690 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e1fb2851c5889b4bf5881f77192fbdf74057f1a5026032c0f325fe1da9486a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7676c47945-x89mv" Sep 4 04:27:06.021906 kubelet[2728]: E0904 04:27:06.021718 2728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"09e1fb2851c5889b4bf5881f77192fbdf74057f1a5026032c0f325fe1da9486a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7676c47945-x89mv" Sep 4 04:27:06.022062 kubelet[2728]: E0904 04:27:06.021778 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7676c47945-x89mv_calico-apiserver(9a23b40c-a5ee-4ca1-98cf-6891fa20aa3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7676c47945-x89mv_calico-apiserver(9a23b40c-a5ee-4ca1-98cf-6891fa20aa3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09e1fb2851c5889b4bf5881f77192fbdf74057f1a5026032c0f325fe1da9486a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7676c47945-x89mv" podUID="9a23b40c-a5ee-4ca1-98cf-6891fa20aa3f" Sep 4 04:27:06.052738 systemd[1]: run-netns-cni\x2d84a9ca88\x2dc9d6\x2d46f7\x2d5e95\x2dbd3911e53f4c.mount: Deactivated successfully. 
Sep 4 04:27:06.055732 containerd[1612]: time="2025-09-04T04:27:06.055424390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d976b6dcd-xfldh,Uid:db01949a-0c6e-4bba-af1d-bcb364ba4424,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"917f124a72c100c7e284d453019dc46bbb92d72c71ae522ec4b8faaac12c2bc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:06.057867 kubelet[2728]: E0904 04:27:06.057664 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"917f124a72c100c7e284d453019dc46bbb92d72c71ae522ec4b8faaac12c2bc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:06.058033 kubelet[2728]: E0904 04:27:06.058006 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"917f124a72c100c7e284d453019dc46bbb92d72c71ae522ec4b8faaac12c2bc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d976b6dcd-xfldh" Sep 4 04:27:06.058192 kubelet[2728]: E0904 04:27:06.058124 2728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"917f124a72c100c7e284d453019dc46bbb92d72c71ae522ec4b8faaac12c2bc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-6d976b6dcd-xfldh" Sep 4 04:27:06.058361 kubelet[2728]: E0904 04:27:06.058267 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d976b6dcd-xfldh_calico-system(db01949a-0c6e-4bba-af1d-bcb364ba4424)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d976b6dcd-xfldh_calico-system(db01949a-0c6e-4bba-af1d-bcb364ba4424)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"917f124a72c100c7e284d453019dc46bbb92d72c71ae522ec4b8faaac12c2bc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d976b6dcd-xfldh" podUID="db01949a-0c6e-4bba-af1d-bcb364ba4424" Sep 4 04:27:06.123665 containerd[1612]: time="2025-09-04T04:27:06.123365254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 4 04:27:06.125652 containerd[1612]: time="2025-09-04T04:27:06.125481128Z" level=error msg="Failed to destroy network for sandbox \"8749ea5609f99ed4d6cf2c689b1bf7dab6586e590c048039d95c15198dd58aa0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:06.128920 systemd[1]: run-netns-cni\x2dcb45b4ae\x2da9e3\x2ddeb1\x2dbc0f\x2dad990fd6706d.mount: Deactivated successfully. 
Sep 4 04:27:06.130053 containerd[1612]: time="2025-09-04T04:27:06.129979986Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bhrnv,Uid:471e8a18-1211-4eda-b37a-0a76de6d8f44,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8749ea5609f99ed4d6cf2c689b1bf7dab6586e590c048039d95c15198dd58aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:06.130526 kubelet[2728]: E0904 04:27:06.130475 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8749ea5609f99ed4d6cf2c689b1bf7dab6586e590c048039d95c15198dd58aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:06.130526 kubelet[2728]: E0904 04:27:06.130538 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8749ea5609f99ed4d6cf2c689b1bf7dab6586e590c048039d95c15198dd58aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bhrnv" Sep 4 04:27:06.131086 kubelet[2728]: E0904 04:27:06.130563 2728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8749ea5609f99ed4d6cf2c689b1bf7dab6586e590c048039d95c15198dd58aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bhrnv" Sep 4 
04:27:06.131086 kubelet[2728]: E0904 04:27:06.130610 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bhrnv_calico-system(471e8a18-1211-4eda-b37a-0a76de6d8f44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bhrnv_calico-system(471e8a18-1211-4eda-b37a-0a76de6d8f44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8749ea5609f99ed4d6cf2c689b1bf7dab6586e590c048039d95c15198dd58aa0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bhrnv" podUID="471e8a18-1211-4eda-b37a-0a76de6d8f44" Sep 4 04:27:15.974147 containerd[1612]: time="2025-09-04T04:27:15.974074333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7676c47945-gdttk,Uid:41c81eb9-036f-44f8-aa46-f56bea288457,Namespace:calico-apiserver,Attempt:0,}" Sep 4 04:27:17.773626 kubelet[2728]: E0904 04:27:17.773265 2728 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.788s" Sep 4 04:27:17.774115 kubelet[2728]: E0904 04:27:17.773846 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:27:17.774227 kubelet[2728]: E0904 04:27:17.774193 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:27:17.774644 containerd[1612]: time="2025-09-04T04:27:17.774554176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854984d5c7-kpvmr,Uid:70ee29dd-1950-4e60-940b-ea22b976f88f,Namespace:calico-system,Attempt:0,}" Sep 4 04:27:17.775002 containerd[1612]: 
time="2025-09-04T04:27:17.774729174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hlfrf,Uid:ceca6a0d-4411-4bf5-9886-bc8bec807f34,Namespace:kube-system,Attempt:0,}" Sep 4 04:27:17.775478 containerd[1612]: time="2025-09-04T04:27:17.775429977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ncsgv,Uid:fc310726-94cb-4b24-8d5b-a83ff5c37bf9,Namespace:kube-system,Attempt:0,}" Sep 4 04:27:18.385041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370982347.mount: Deactivated successfully. Sep 4 04:27:18.807715 containerd[1612]: time="2025-09-04T04:27:18.807565457Z" level=error msg="Failed to destroy network for sandbox \"1a077f5786a22ce58514eeca51e3d0a76f06ddd240926a6c9ddb6159a3e74e87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:18.811424 systemd[1]: run-netns-cni\x2dc47c12b6\x2d45e5\x2d84fa\x2d2b6e\x2da64dc11c513d.mount: Deactivated successfully. Sep 4 04:27:18.819968 containerd[1612]: time="2025-09-04T04:27:18.819885943Z" level=error msg="Failed to destroy network for sandbox \"ec344c92532ddaf87b7f40dd8bf9888d1bcb64c2d1ac128baea52826ef88b3b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:18.826075 systemd[1]: run-netns-cni\x2dec70b0f3\x2da00f\x2d8600\x2d915d\x2dc6cca5aac1a0.mount: Deactivated successfully. 
Sep 4 04:27:18.832601 containerd[1612]: time="2025-09-04T04:27:18.832522921Z" level=error msg="Failed to destroy network for sandbox \"abf0d06dc4b633aa0cae10fdfe345bfae7d014aef5944252b565ae1087edea75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:18.835196 systemd[1]: run-netns-cni\x2d5fd5fad8\x2d213e\x2d5a41\x2dba92\x2d7c8aa01aab9c.mount: Deactivated successfully. Sep 4 04:27:18.842592 containerd[1612]: time="2025-09-04T04:27:18.842512390Z" level=error msg="Failed to destroy network for sandbox \"1c2780f69337f0ed57f590acace9f25b172647592309d877eb0d31b4361b2a59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:18.844974 systemd[1]: run-netns-cni\x2ddf17af38\x2d9646\x2dee09\x2dca95\x2d19d2192a9139.mount: Deactivated successfully. 
Sep 4 04:27:18.924325 containerd[1612]: time="2025-09-04T04:27:18.924225690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:18.962615 containerd[1612]: time="2025-09-04T04:27:18.962540303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7676c47945-gdttk,Uid:41c81eb9-036f-44f8-aa46-f56bea288457,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a077f5786a22ce58514eeca51e3d0a76f06ddd240926a6c9ddb6159a3e74e87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:18.962991 kubelet[2728]: E0904 04:27:18.962917 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a077f5786a22ce58514eeca51e3d0a76f06ddd240926a6c9ddb6159a3e74e87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:18.963547 kubelet[2728]: E0904 04:27:18.963038 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a077f5786a22ce58514eeca51e3d0a76f06ddd240926a6c9ddb6159a3e74e87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7676c47945-gdttk" Sep 4 04:27:18.963547 kubelet[2728]: E0904 04:27:18.963067 2728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1a077f5786a22ce58514eeca51e3d0a76f06ddd240926a6c9ddb6159a3e74e87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7676c47945-gdttk" Sep 4 04:27:18.963547 kubelet[2728]: E0904 04:27:18.963438 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7676c47945-gdttk_calico-apiserver(41c81eb9-036f-44f8-aa46-f56bea288457)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7676c47945-gdttk_calico-apiserver(41c81eb9-036f-44f8-aa46-f56bea288457)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a077f5786a22ce58514eeca51e3d0a76f06ddd240926a6c9ddb6159a3e74e87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7676c47945-gdttk" podUID="41c81eb9-036f-44f8-aa46-f56bea288457" Sep 4 04:27:18.964153 containerd[1612]: time="2025-09-04T04:27:18.964101699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854984d5c7-kpvmr,Uid:70ee29dd-1950-4e60-940b-ea22b976f88f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec344c92532ddaf87b7f40dd8bf9888d1bcb64c2d1ac128baea52826ef88b3b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:18.964452 kubelet[2728]: E0904 04:27:18.964398 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec344c92532ddaf87b7f40dd8bf9888d1bcb64c2d1ac128baea52826ef88b3b2\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:18.964541 kubelet[2728]: E0904 04:27:18.964455 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec344c92532ddaf87b7f40dd8bf9888d1bcb64c2d1ac128baea52826ef88b3b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-854984d5c7-kpvmr" Sep 4 04:27:18.964541 kubelet[2728]: E0904 04:27:18.964476 2728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec344c92532ddaf87b7f40dd8bf9888d1bcb64c2d1ac128baea52826ef88b3b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-854984d5c7-kpvmr" Sep 4 04:27:18.964616 kubelet[2728]: E0904 04:27:18.964532 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-854984d5c7-kpvmr_calico-system(70ee29dd-1950-4e60-940b-ea22b976f88f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-854984d5c7-kpvmr_calico-system(70ee29dd-1950-4e60-940b-ea22b976f88f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec344c92532ddaf87b7f40dd8bf9888d1bcb64c2d1ac128baea52826ef88b3b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-854984d5c7-kpvmr" 
podUID="70ee29dd-1950-4e60-940b-ea22b976f88f" Sep 4 04:27:18.965995 containerd[1612]: time="2025-09-04T04:27:18.965944081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ncsgv,Uid:fc310726-94cb-4b24-8d5b-a83ff5c37bf9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf0d06dc4b633aa0cae10fdfe345bfae7d014aef5944252b565ae1087edea75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:18.966188 kubelet[2728]: E0904 04:27:18.966159 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf0d06dc4b633aa0cae10fdfe345bfae7d014aef5944252b565ae1087edea75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:18.966248 kubelet[2728]: E0904 04:27:18.966200 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf0d06dc4b633aa0cae10fdfe345bfae7d014aef5944252b565ae1087edea75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ncsgv" Sep 4 04:27:18.966248 kubelet[2728]: E0904 04:27:18.966220 2728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf0d06dc4b633aa0cae10fdfe345bfae7d014aef5944252b565ae1087edea75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-ncsgv" Sep 4 04:27:18.966348 kubelet[2728]: E0904 04:27:18.966260 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-ncsgv_kube-system(fc310726-94cb-4b24-8d5b-a83ff5c37bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-ncsgv_kube-system(fc310726-94cb-4b24-8d5b-a83ff5c37bf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abf0d06dc4b633aa0cae10fdfe345bfae7d014aef5944252b565ae1087edea75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ncsgv" podUID="fc310726-94cb-4b24-8d5b-a83ff5c37bf9" Sep 4 04:27:18.967918 containerd[1612]: time="2025-09-04T04:27:18.967851395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hlfrf,Uid:ceca6a0d-4411-4bf5-9886-bc8bec807f34,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c2780f69337f0ed57f590acace9f25b172647592309d877eb0d31b4361b2a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:18.968204 kubelet[2728]: E0904 04:27:18.968111 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c2780f69337f0ed57f590acace9f25b172647592309d877eb0d31b4361b2a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:18.968204 kubelet[2728]: E0904 04:27:18.968165 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"1c2780f69337f0ed57f590acace9f25b172647592309d877eb0d31b4361b2a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hlfrf" Sep 4 04:27:18.968204 kubelet[2728]: E0904 04:27:18.968185 2728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c2780f69337f0ed57f590acace9f25b172647592309d877eb0d31b4361b2a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hlfrf" Sep 4 04:27:18.968382 kubelet[2728]: E0904 04:27:18.968231 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hlfrf_kube-system(ceca6a0d-4411-4bf5-9886-bc8bec807f34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hlfrf_kube-system(ceca6a0d-4411-4bf5-9886-bc8bec807f34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c2780f69337f0ed57f590acace9f25b172647592309d877eb0d31b4361b2a59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hlfrf" podUID="ceca6a0d-4411-4bf5-9886-bc8bec807f34" Sep 4 04:27:18.970528 containerd[1612]: time="2025-09-04T04:27:18.970482415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 4 04:27:18.973308 containerd[1612]: time="2025-09-04T04:27:18.972916936Z" level=info msg="ImageCreate event 
name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:18.974305 containerd[1612]: time="2025-09-04T04:27:18.973503736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-kfp9d,Uid:43cb4990-cf2d-4baf-b7d5-c875eeaa23e6,Namespace:calico-system,Attempt:0,}" Sep 4 04:27:18.974305 containerd[1612]: time="2025-09-04T04:27:18.973954100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d976b6dcd-xfldh,Uid:db01949a-0c6e-4bba-af1d-bcb364ba4424,Namespace:calico-system,Attempt:0,}" Sep 4 04:27:18.978022 containerd[1612]: time="2025-09-04T04:27:18.977960567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:18.981360 containerd[1612]: time="2025-09-04T04:27:18.979216039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 12.855793167s" Sep 4 04:27:18.981360 containerd[1612]: time="2025-09-04T04:27:18.979262607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 4 04:27:18.998566 containerd[1612]: time="2025-09-04T04:27:18.998495705Z" level=info msg="CreateContainer within sandbox \"ec8f4fd9e37eb00e7ca93b1ad32fb18a8b3f8cb732f853402ff19638e114ccbf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 04:27:19.046424 containerd[1612]: time="2025-09-04T04:27:19.045642879Z" level=info msg="Container 
18a61689b07f1391de9892494387c6a9902a46378521ec93dd38a833e77f8bb8: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:27:19.064343 containerd[1612]: time="2025-09-04T04:27:19.063877266Z" level=info msg="CreateContainer within sandbox \"ec8f4fd9e37eb00e7ca93b1ad32fb18a8b3f8cb732f853402ff19638e114ccbf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"18a61689b07f1391de9892494387c6a9902a46378521ec93dd38a833e77f8bb8\"" Sep 4 04:27:19.066421 containerd[1612]: time="2025-09-04T04:27:19.065916899Z" level=info msg="StartContainer for \"18a61689b07f1391de9892494387c6a9902a46378521ec93dd38a833e77f8bb8\"" Sep 4 04:27:19.068065 containerd[1612]: time="2025-09-04T04:27:19.068041840Z" level=info msg="connecting to shim 18a61689b07f1391de9892494387c6a9902a46378521ec93dd38a833e77f8bb8" address="unix:///run/containerd/s/6ec5065d0c58e402a6e07b54cb1d48f7494fece9b96e1324524f32b7fe86533e" protocol=ttrpc version=3 Sep 4 04:27:19.076106 containerd[1612]: time="2025-09-04T04:27:19.075963644Z" level=error msg="Failed to destroy network for sandbox \"6a7d763e31d99b72a5fd4d9e2ff671b579af47a6fee48c30c94bc69a30f243da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:19.078405 containerd[1612]: time="2025-09-04T04:27:19.078349114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-kfp9d,Uid:43cb4990-cf2d-4baf-b7d5-c875eeaa23e6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7d763e31d99b72a5fd4d9e2ff671b579af47a6fee48c30c94bc69a30f243da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:19.078712 kubelet[2728]: E0904 04:27:19.078631 2728 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7d763e31d99b72a5fd4d9e2ff671b579af47a6fee48c30c94bc69a30f243da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:19.078794 kubelet[2728]: E0904 04:27:19.078717 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7d763e31d99b72a5fd4d9e2ff671b579af47a6fee48c30c94bc69a30f243da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-kfp9d" Sep 4 04:27:19.078794 kubelet[2728]: E0904 04:27:19.078747 2728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7d763e31d99b72a5fd4d9e2ff671b579af47a6fee48c30c94bc69a30f243da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-kfp9d" Sep 4 04:27:19.079170 kubelet[2728]: E0904 04:27:19.078821 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-kfp9d_calico-system(43cb4990-cf2d-4baf-b7d5-c875eeaa23e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-kfp9d_calico-system(43cb4990-cf2d-4baf-b7d5-c875eeaa23e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a7d763e31d99b72a5fd4d9e2ff671b579af47a6fee48c30c94bc69a30f243da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-kfp9d" podUID="43cb4990-cf2d-4baf-b7d5-c875eeaa23e6" Sep 4 04:27:19.092587 containerd[1612]: time="2025-09-04T04:27:19.092491153Z" level=error msg="Failed to destroy network for sandbox \"30b27b35f145ab4cc28e105dbffe65f2d7a638762d0e4e470aee8ed0585a4588\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:19.094263 containerd[1612]: time="2025-09-04T04:27:19.094210475Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d976b6dcd-xfldh,Uid:db01949a-0c6e-4bba-af1d-bcb364ba4424,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30b27b35f145ab4cc28e105dbffe65f2d7a638762d0e4e470aee8ed0585a4588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:19.094620 kubelet[2728]: E0904 04:27:19.094567 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30b27b35f145ab4cc28e105dbffe65f2d7a638762d0e4e470aee8ed0585a4588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 04:27:19.094690 kubelet[2728]: E0904 04:27:19.094649 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30b27b35f145ab4cc28e105dbffe65f2d7a638762d0e4e470aee8ed0585a4588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-6d976b6dcd-xfldh" Sep 4 04:27:19.094690 kubelet[2728]: E0904 04:27:19.094676 2728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30b27b35f145ab4cc28e105dbffe65f2d7a638762d0e4e470aee8ed0585a4588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d976b6dcd-xfldh" Sep 4 04:27:19.094767 kubelet[2728]: E0904 04:27:19.094738 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d976b6dcd-xfldh_calico-system(db01949a-0c6e-4bba-af1d-bcb364ba4424)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d976b6dcd-xfldh_calico-system(db01949a-0c6e-4bba-af1d-bcb364ba4424)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30b27b35f145ab4cc28e105dbffe65f2d7a638762d0e4e470aee8ed0585a4588\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d976b6dcd-xfldh" podUID="db01949a-0c6e-4bba-af1d-bcb364ba4424" Sep 4 04:27:19.106603 systemd[1]: Started cri-containerd-18a61689b07f1391de9892494387c6a9902a46378521ec93dd38a833e77f8bb8.scope - libcontainer container 18a61689b07f1391de9892494387c6a9902a46378521ec93dd38a833e77f8bb8. Sep 4 04:27:19.183188 containerd[1612]: time="2025-09-04T04:27:19.183133097Z" level=info msg="StartContainer for \"18a61689b07f1391de9892494387c6a9902a46378521ec93dd38a833e77f8bb8\" returns successfully" Sep 4 04:27:19.271811 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 04:27:19.272680 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 4 04:27:19.781000 kubelet[2728]: I0904 04:27:19.780911 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wnrkv" podStartSLOduration=2.151305018 podStartE2EDuration="28.780881845s" podCreationTimestamp="2025-09-04 04:26:51 +0000 UTC" firstStartedPulling="2025-09-04 04:26:52.350592318 +0000 UTC m=+20.615501962" lastFinishedPulling="2025-09-04 04:27:18.980169145 +0000 UTC m=+47.245078789" observedRunningTime="2025-09-04 04:27:19.78032917 +0000 UTC m=+48.045238814" watchObservedRunningTime="2025-09-04 04:27:19.780881845 +0000 UTC m=+48.045791489" Sep 4 04:27:19.814193 systemd[1]: run-netns-cni\x2d441032b1\x2d9cdf\x2dcf89\x2d18e1\x2d96d4794ee590.mount: Deactivated successfully. Sep 4 04:27:19.878473 kubelet[2728]: I0904 04:27:19.878379 2728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db01949a-0c6e-4bba-af1d-bcb364ba4424-whisker-ca-bundle\") pod \"db01949a-0c6e-4bba-af1d-bcb364ba4424\" (UID: \"db01949a-0c6e-4bba-af1d-bcb364ba4424\") " Sep 4 04:27:19.878473 kubelet[2728]: I0904 04:27:19.878467 2728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/db01949a-0c6e-4bba-af1d-bcb364ba4424-whisker-backend-key-pair\") pod \"db01949a-0c6e-4bba-af1d-bcb364ba4424\" (UID: \"db01949a-0c6e-4bba-af1d-bcb364ba4424\") " Sep 4 04:27:19.878701 kubelet[2728]: I0904 04:27:19.878503 2728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4bgk\" (UniqueName: \"kubernetes.io/projected/db01949a-0c6e-4bba-af1d-bcb364ba4424-kube-api-access-b4bgk\") pod \"db01949a-0c6e-4bba-af1d-bcb364ba4424\" (UID: \"db01949a-0c6e-4bba-af1d-bcb364ba4424\") " Sep 4 04:27:19.879908 kubelet[2728]: I0904 04:27:19.879851 2728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/db01949a-0c6e-4bba-af1d-bcb364ba4424-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "db01949a-0c6e-4bba-af1d-bcb364ba4424" (UID: "db01949a-0c6e-4bba-af1d-bcb364ba4424"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 04:27:19.887997 kubelet[2728]: I0904 04:27:19.887926 2728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db01949a-0c6e-4bba-af1d-bcb364ba4424-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "db01949a-0c6e-4bba-af1d-bcb364ba4424" (UID: "db01949a-0c6e-4bba-af1d-bcb364ba4424"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 04:27:19.888077 systemd[1]: var-lib-kubelet-pods-db01949a\x2d0c6e\x2d4bba\x2daf1d\x2dbcb364ba4424-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db4bgk.mount: Deactivated successfully. Sep 4 04:27:19.890016 kubelet[2728]: I0904 04:27:19.889573 2728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db01949a-0c6e-4bba-af1d-bcb364ba4424-kube-api-access-b4bgk" (OuterVolumeSpecName: "kube-api-access-b4bgk") pod "db01949a-0c6e-4bba-af1d-bcb364ba4424" (UID: "db01949a-0c6e-4bba-af1d-bcb364ba4424"). InnerVolumeSpecName "kube-api-access-b4bgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 04:27:19.891971 systemd[1]: var-lib-kubelet-pods-db01949a\x2d0c6e\x2d4bba\x2daf1d\x2dbcb364ba4424-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 4 04:27:19.946868 containerd[1612]: time="2025-09-04T04:27:19.946819187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18a61689b07f1391de9892494387c6a9902a46378521ec93dd38a833e77f8bb8\" id:\"dbb5e92676cbe46fea85a00d85f135db2c69dfebdf356b732bed37ec5dbd4862\" pid:4119 exit_status:1 exited_at:{seconds:1756960039 nanos:946373352}" Sep 4 04:27:19.979456 kubelet[2728]: I0904 04:27:19.979352 2728 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/db01949a-0c6e-4bba-af1d-bcb364ba4424-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 4 04:27:19.980158 kubelet[2728]: I0904 04:27:19.980117 2728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4bgk\" (UniqueName: \"kubernetes.io/projected/db01949a-0c6e-4bba-af1d-bcb364ba4424-kube-api-access-b4bgk\") on node \"localhost\" DevicePath \"\"" Sep 4 04:27:19.980158 kubelet[2728]: I0904 04:27:19.980141 2728 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db01949a-0c6e-4bba-af1d-bcb364ba4424-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 4 04:27:19.991781 systemd[1]: Removed slice kubepods-besteffort-poddb01949a_0c6e_4bba_af1d_bcb364ba4424.slice - libcontainer container kubepods-besteffort-poddb01949a_0c6e_4bba_af1d_bcb364ba4424.slice. Sep 4 04:27:20.845800 systemd[1]: Created slice kubepods-besteffort-pod82a2a342_141b_4e40_9156_f0a5f5ef1bfa.slice - libcontainer container kubepods-besteffort-pod82a2a342_141b_4e40_9156_f0a5f5ef1bfa.slice. 
Sep 4 04:27:20.940130 containerd[1612]: time="2025-09-04T04:27:20.940062749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18a61689b07f1391de9892494387c6a9902a46378521ec93dd38a833e77f8bb8\" id:\"50585cc2b863fc35d1cb26779e1a38058f6c9653a30fba3c365ad84c77c26845\" pid:4250 exit_status:1 exited_at:{seconds:1756960040 nanos:939264473}" Sep 4 04:27:20.988547 kubelet[2728]: I0904 04:27:20.988456 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82a2a342-141b-4e40-9156-f0a5f5ef1bfa-whisker-backend-key-pair\") pod \"whisker-6db7d976d9-x92pk\" (UID: \"82a2a342-141b-4e40-9156-f0a5f5ef1bfa\") " pod="calico-system/whisker-6db7d976d9-x92pk" Sep 4 04:27:20.988547 kubelet[2728]: I0904 04:27:20.988525 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snqnp\" (UniqueName: \"kubernetes.io/projected/82a2a342-141b-4e40-9156-f0a5f5ef1bfa-kube-api-access-snqnp\") pod \"whisker-6db7d976d9-x92pk\" (UID: \"82a2a342-141b-4e40-9156-f0a5f5ef1bfa\") " pod="calico-system/whisker-6db7d976d9-x92pk" Sep 4 04:27:20.988547 kubelet[2728]: I0904 04:27:20.988546 2728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82a2a342-141b-4e40-9156-f0a5f5ef1bfa-whisker-ca-bundle\") pod \"whisker-6db7d976d9-x92pk\" (UID: \"82a2a342-141b-4e40-9156-f0a5f5ef1bfa\") " pod="calico-system/whisker-6db7d976d9-x92pk" Sep 4 04:27:21.155409 containerd[1612]: time="2025-09-04T04:27:21.155227294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6db7d976d9-x92pk,Uid:82a2a342-141b-4e40-9156-f0a5f5ef1bfa,Namespace:calico-system,Attempt:0,}" Sep 4 04:27:21.250751 systemd-networkd[1492]: vxlan.calico: Link UP Sep 4 04:27:21.250761 systemd-networkd[1492]: vxlan.calico: Gained carrier Sep 4 04:27:21.433985 
systemd-networkd[1492]: calid8642fd6dc1: Link UP Sep 4 04:27:21.435371 systemd-networkd[1492]: calid8642fd6dc1: Gained carrier Sep 4 04:27:21.451420 containerd[1612]: 2025-09-04 04:27:21.310 [INFO][4307] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6db7d976d9--x92pk-eth0 whisker-6db7d976d9- calico-system 82a2a342-141b-4e40-9156-f0a5f5ef1bfa 969 0 2025-09-04 04:27:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6db7d976d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6db7d976d9-x92pk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid8642fd6dc1 [] [] }} ContainerID="cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" Namespace="calico-system" Pod="whisker-6db7d976d9-x92pk" WorkloadEndpoint="localhost-k8s-whisker--6db7d976d9--x92pk-" Sep 4 04:27:21.451420 containerd[1612]: 2025-09-04 04:27:21.310 [INFO][4307] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" Namespace="calico-system" Pod="whisker-6db7d976d9-x92pk" WorkloadEndpoint="localhost-k8s-whisker--6db7d976d9--x92pk-eth0" Sep 4 04:27:21.451420 containerd[1612]: 2025-09-04 04:27:21.384 [INFO][4344] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" HandleID="k8s-pod-network.cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" Workload="localhost-k8s-whisker--6db7d976d9--x92pk-eth0" Sep 4 04:27:21.451663 containerd[1612]: 2025-09-04 04:27:21.385 [INFO][4344] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" 
HandleID="k8s-pod-network.cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" Workload="localhost-k8s-whisker--6db7d976d9--x92pk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d09b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6db7d976d9-x92pk", "timestamp":"2025-09-04 04:27:21.384876021 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 04:27:21.451663 containerd[1612]: 2025-09-04 04:27:21.385 [INFO][4344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:27:21.451663 containerd[1612]: 2025-09-04 04:27:21.386 [INFO][4344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 04:27:21.451663 containerd[1612]: 2025-09-04 04:27:21.386 [INFO][4344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:27:21.451663 containerd[1612]: 2025-09-04 04:27:21.396 [INFO][4344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" host="localhost" Sep 4 04:27:21.451663 containerd[1612]: 2025-09-04 04:27:21.404 [INFO][4344] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:27:21.451663 containerd[1612]: 2025-09-04 04:27:21.408 [INFO][4344] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:27:21.451663 containerd[1612]: 2025-09-04 04:27:21.410 [INFO][4344] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:27:21.451663 containerd[1612]: 2025-09-04 04:27:21.412 [INFO][4344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 04:27:21.451663 containerd[1612]: 2025-09-04 04:27:21.412 [INFO][4344] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" host="localhost" Sep 4 04:27:21.451913 containerd[1612]: 2025-09-04 04:27:21.414 [INFO][4344] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50 Sep 4 04:27:21.451913 containerd[1612]: 2025-09-04 04:27:21.418 [INFO][4344] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" host="localhost" Sep 4 04:27:21.451913 containerd[1612]: 2025-09-04 04:27:21.424 [INFO][4344] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" host="localhost" Sep 4 04:27:21.451913 containerd[1612]: 2025-09-04 04:27:21.424 [INFO][4344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" host="localhost" Sep 4 04:27:21.451913 containerd[1612]: 2025-09-04 04:27:21.424 [INFO][4344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 04:27:21.451913 containerd[1612]: 2025-09-04 04:27:21.424 [INFO][4344] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" HandleID="k8s-pod-network.cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" Workload="localhost-k8s-whisker--6db7d976d9--x92pk-eth0" Sep 4 04:27:21.452038 containerd[1612]: 2025-09-04 04:27:21.428 [INFO][4307] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" Namespace="calico-system" Pod="whisker-6db7d976d9-x92pk" WorkloadEndpoint="localhost-k8s-whisker--6db7d976d9--x92pk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6db7d976d9--x92pk-eth0", GenerateName:"whisker-6db7d976d9-", Namespace:"calico-system", SelfLink:"", UID:"82a2a342-141b-4e40-9156-f0a5f5ef1bfa", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 27, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6db7d976d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6db7d976d9-x92pk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid8642fd6dc1", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:27:21.452038 containerd[1612]: 2025-09-04 04:27:21.428 [INFO][4307] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" Namespace="calico-system" Pod="whisker-6db7d976d9-x92pk" WorkloadEndpoint="localhost-k8s-whisker--6db7d976d9--x92pk-eth0" Sep 4 04:27:21.452112 containerd[1612]: 2025-09-04 04:27:21.428 [INFO][4307] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8642fd6dc1 ContainerID="cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" Namespace="calico-system" Pod="whisker-6db7d976d9-x92pk" WorkloadEndpoint="localhost-k8s-whisker--6db7d976d9--x92pk-eth0" Sep 4 04:27:21.452112 containerd[1612]: 2025-09-04 04:27:21.436 [INFO][4307] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" Namespace="calico-system" Pod="whisker-6db7d976d9-x92pk" WorkloadEndpoint="localhost-k8s-whisker--6db7d976d9--x92pk-eth0" Sep 4 04:27:21.452157 containerd[1612]: 2025-09-04 04:27:21.436 [INFO][4307] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" Namespace="calico-system" Pod="whisker-6db7d976d9-x92pk" WorkloadEndpoint="localhost-k8s-whisker--6db7d976d9--x92pk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6db7d976d9--x92pk-eth0", GenerateName:"whisker-6db7d976d9-", Namespace:"calico-system", SelfLink:"", UID:"82a2a342-141b-4e40-9156-f0a5f5ef1bfa", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 27, 20, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6db7d976d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50", Pod:"whisker-6db7d976d9-x92pk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid8642fd6dc1", MAC:"f2:6c:3c:76:74:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:27:21.452208 containerd[1612]: 2025-09-04 04:27:21.446 [INFO][4307] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" Namespace="calico-system" Pod="whisker-6db7d976d9-x92pk" WorkloadEndpoint="localhost-k8s-whisker--6db7d976d9--x92pk-eth0" Sep 4 04:27:21.570520 containerd[1612]: time="2025-09-04T04:27:21.570458134Z" level=info msg="connecting to shim cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50" address="unix:///run/containerd/s/03795ff49590eb777e169195a1567bb5b45be1780197a91c5afa71f8699c840f" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:27:21.606527 systemd[1]: Started cri-containerd-cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50.scope - libcontainer container cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50. 
Sep 4 04:27:21.628309 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 04:27:21.670871 containerd[1612]: time="2025-09-04T04:27:21.670818766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6db7d976d9-x92pk,Uid:82a2a342-141b-4e40-9156-f0a5f5ef1bfa,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50\"" Sep 4 04:27:21.677521 containerd[1612]: time="2025-09-04T04:27:21.677480258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 4 04:27:21.973381 containerd[1612]: time="2025-09-04T04:27:21.973321531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bhrnv,Uid:471e8a18-1211-4eda-b37a-0a76de6d8f44,Namespace:calico-system,Attempt:0,}" Sep 4 04:27:21.973615 containerd[1612]: time="2025-09-04T04:27:21.973321491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7676c47945-x89mv,Uid:9a23b40c-a5ee-4ca1-98cf-6891fa20aa3f,Namespace:calico-apiserver,Attempt:0,}" Sep 4 04:27:21.975249 kubelet[2728]: I0904 04:27:21.975208 2728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db01949a-0c6e-4bba-af1d-bcb364ba4424" path="/var/lib/kubelet/pods/db01949a-0c6e-4bba-af1d-bcb364ba4424/volumes" Sep 4 04:27:22.140500 systemd-networkd[1492]: cali850c4a4d378: Link UP Sep 4 04:27:22.141236 systemd-networkd[1492]: cali850c4a4d378: Gained carrier Sep 4 04:27:22.155096 containerd[1612]: 2025-09-04 04:27:22.050 [INFO][4445] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bhrnv-eth0 csi-node-driver- calico-system 471e8a18-1211-4eda-b37a-0a76de6d8f44 754 0 2025-09-04 04:26:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bhrnv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali850c4a4d378 [] [] }} ContainerID="9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" Namespace="calico-system" Pod="csi-node-driver-bhrnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhrnv-" Sep 4 04:27:22.155096 containerd[1612]: 2025-09-04 04:27:22.050 [INFO][4445] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" Namespace="calico-system" Pod="csi-node-driver-bhrnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhrnv-eth0" Sep 4 04:27:22.155096 containerd[1612]: 2025-09-04 04:27:22.085 [INFO][4472] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" HandleID="k8s-pod-network.9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" Workload="localhost-k8s-csi--node--driver--bhrnv-eth0" Sep 4 04:27:22.155359 containerd[1612]: 2025-09-04 04:27:22.085 [INFO][4472] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" HandleID="k8s-pod-network.9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" Workload="localhost-k8s-csi--node--driver--bhrnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000524a60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bhrnv", "timestamp":"2025-09-04 04:27:22.085581381 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 
04:27:22.155359 containerd[1612]: 2025-09-04 04:27:22.085 [INFO][4472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:27:22.155359 containerd[1612]: 2025-09-04 04:27:22.085 [INFO][4472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 04:27:22.155359 containerd[1612]: 2025-09-04 04:27:22.085 [INFO][4472] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:27:22.155359 containerd[1612]: 2025-09-04 04:27:22.101 [INFO][4472] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" host="localhost" Sep 4 04:27:22.155359 containerd[1612]: 2025-09-04 04:27:22.108 [INFO][4472] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:27:22.155359 containerd[1612]: 2025-09-04 04:27:22.114 [INFO][4472] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:27:22.155359 containerd[1612]: 2025-09-04 04:27:22.115 [INFO][4472] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:27:22.155359 containerd[1612]: 2025-09-04 04:27:22.118 [INFO][4472] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 04:27:22.155359 containerd[1612]: 2025-09-04 04:27:22.118 [INFO][4472] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" host="localhost" Sep 4 04:27:22.155922 containerd[1612]: 2025-09-04 04:27:22.119 [INFO][4472] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08 Sep 4 04:27:22.155922 containerd[1612]: 2025-09-04 04:27:22.125 [INFO][4472] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" host="localhost" Sep 4 04:27:22.155922 containerd[1612]: 2025-09-04 04:27:22.134 [INFO][4472] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" host="localhost" Sep 4 04:27:22.155922 containerd[1612]: 2025-09-04 04:27:22.134 [INFO][4472] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" host="localhost" Sep 4 04:27:22.155922 containerd[1612]: 2025-09-04 04:27:22.134 [INFO][4472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 04:27:22.155922 containerd[1612]: 2025-09-04 04:27:22.134 [INFO][4472] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" HandleID="k8s-pod-network.9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" Workload="localhost-k8s-csi--node--driver--bhrnv-eth0" Sep 4 04:27:22.156055 containerd[1612]: 2025-09-04 04:27:22.137 [INFO][4445] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" Namespace="calico-system" Pod="csi-node-driver-bhrnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhrnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bhrnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"471e8a18-1211-4eda-b37a-0a76de6d8f44", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bhrnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali850c4a4d378", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:27:22.156119 containerd[1612]: 2025-09-04 04:27:22.138 [INFO][4445] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" Namespace="calico-system" Pod="csi-node-driver-bhrnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhrnv-eth0" Sep 4 04:27:22.156119 containerd[1612]: 2025-09-04 04:27:22.138 [INFO][4445] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali850c4a4d378 ContainerID="9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" Namespace="calico-system" Pod="csi-node-driver-bhrnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhrnv-eth0" Sep 4 04:27:22.156119 containerd[1612]: 2025-09-04 04:27:22.140 [INFO][4445] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" Namespace="calico-system" Pod="csi-node-driver-bhrnv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--bhrnv-eth0" Sep 4 04:27:22.156228 containerd[1612]: 2025-09-04 04:27:22.140 [INFO][4445] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" Namespace="calico-system" Pod="csi-node-driver-bhrnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhrnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bhrnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"471e8a18-1211-4eda-b37a-0a76de6d8f44", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08", Pod:"csi-node-driver-bhrnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali850c4a4d378", MAC:"82:34:03:4b:cd:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 
04:27:22.156368 containerd[1612]: 2025-09-04 04:27:22.151 [INFO][4445] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" Namespace="calico-system" Pod="csi-node-driver-bhrnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhrnv-eth0" Sep 4 04:27:22.251835 systemd-networkd[1492]: cali9d993d2cf1e: Link UP Sep 4 04:27:22.257020 systemd-networkd[1492]: cali9d993d2cf1e: Gained carrier Sep 4 04:27:22.279732 containerd[1612]: time="2025-09-04T04:27:22.279649461Z" level=info msg="connecting to shim 9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08" address="unix:///run/containerd/s/667691ecf5ef60fc15430275040d91886742e3f53a9c79d174605496358aa96e" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:27:22.285802 containerd[1612]: 2025-09-04 04:27:22.062 [INFO][4456] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7676c47945--x89mv-eth0 calico-apiserver-7676c47945- calico-apiserver 9a23b40c-a5ee-4ca1-98cf-6891fa20aa3f 878 0 2025-09-04 04:26:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7676c47945 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7676c47945-x89mv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9d993d2cf1e [] [] }} ContainerID="64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-x89mv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--x89mv-" Sep 4 04:27:22.285802 containerd[1612]: 2025-09-04 04:27:22.062 [INFO][4456] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-x89mv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--x89mv-eth0" Sep 4 04:27:22.285802 containerd[1612]: 2025-09-04 04:27:22.102 [INFO][4480] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" HandleID="k8s-pod-network.64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" Workload="localhost-k8s-calico--apiserver--7676c47945--x89mv-eth0" Sep 4 04:27:22.286231 containerd[1612]: 2025-09-04 04:27:22.102 [INFO][4480] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" HandleID="k8s-pod-network.64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" Workload="localhost-k8s-calico--apiserver--7676c47945--x89mv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000133740), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7676c47945-x89mv", "timestamp":"2025-09-04 04:27:22.102250777 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 04:27:22.286231 containerd[1612]: 2025-09-04 04:27:22.102 [INFO][4480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:27:22.286231 containerd[1612]: 2025-09-04 04:27:22.135 [INFO][4480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 04:27:22.286231 containerd[1612]: 2025-09-04 04:27:22.135 [INFO][4480] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:27:22.286231 containerd[1612]: 2025-09-04 04:27:22.200 [INFO][4480] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" host="localhost" Sep 4 04:27:22.286231 containerd[1612]: 2025-09-04 04:27:22.216 [INFO][4480] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:27:22.286231 containerd[1612]: 2025-09-04 04:27:22.221 [INFO][4480] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:27:22.286231 containerd[1612]: 2025-09-04 04:27:22.224 [INFO][4480] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:27:22.286231 containerd[1612]: 2025-09-04 04:27:22.226 [INFO][4480] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 04:27:22.286231 containerd[1612]: 2025-09-04 04:27:22.227 [INFO][4480] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" host="localhost" Sep 4 04:27:22.286547 containerd[1612]: 2025-09-04 04:27:22.229 [INFO][4480] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3 Sep 4 04:27:22.286547 containerd[1612]: 2025-09-04 04:27:22.235 [INFO][4480] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" host="localhost" Sep 4 04:27:22.286547 containerd[1612]: 2025-09-04 04:27:22.243 [INFO][4480] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" host="localhost" Sep 4 04:27:22.286547 containerd[1612]: 2025-09-04 04:27:22.243 [INFO][4480] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" host="localhost" Sep 4 04:27:22.286547 containerd[1612]: 2025-09-04 04:27:22.243 [INFO][4480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 04:27:22.286547 containerd[1612]: 2025-09-04 04:27:22.243 [INFO][4480] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" HandleID="k8s-pod-network.64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" Workload="localhost-k8s-calico--apiserver--7676c47945--x89mv-eth0" Sep 4 04:27:22.286708 containerd[1612]: 2025-09-04 04:27:22.247 [INFO][4456] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-x89mv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--x89mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7676c47945--x89mv-eth0", GenerateName:"calico-apiserver-7676c47945-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a23b40c-a5ee-4ca1-98cf-6891fa20aa3f", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 26, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7676c47945", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7676c47945-x89mv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d993d2cf1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:27:22.286805 containerd[1612]: 2025-09-04 04:27:22.247 [INFO][4456] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-x89mv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--x89mv-eth0" Sep 4 04:27:22.286805 containerd[1612]: 2025-09-04 04:27:22.247 [INFO][4456] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d993d2cf1e ContainerID="64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-x89mv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--x89mv-eth0" Sep 4 04:27:22.286805 containerd[1612]: 2025-09-04 04:27:22.258 [INFO][4456] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-x89mv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--x89mv-eth0" Sep 4 04:27:22.286897 containerd[1612]: 2025-09-04 04:27:22.258 [INFO][4456] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-x89mv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--x89mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7676c47945--x89mv-eth0", GenerateName:"calico-apiserver-7676c47945-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a23b40c-a5ee-4ca1-98cf-6891fa20aa3f", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 26, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7676c47945", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3", Pod:"calico-apiserver-7676c47945-x89mv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d993d2cf1e", MAC:"52:82:6d:ce:c4:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:27:22.286969 containerd[1612]: 2025-09-04 04:27:22.274 [INFO][4456] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-x89mv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--x89mv-eth0" Sep 4 04:27:22.317591 systemd[1]: Started cri-containerd-9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08.scope - libcontainer container 9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08. Sep 4 04:27:22.347682 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 04:27:22.360630 containerd[1612]: time="2025-09-04T04:27:22.360556942Z" level=info msg="connecting to shim 64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3" address="unix:///run/containerd/s/1943f98cf56d53de2129131546a19fc9538b06c14370e72d1db7c76842c0dea5" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:27:22.400986 containerd[1612]: time="2025-09-04T04:27:22.400079603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bhrnv,Uid:471e8a18-1211-4eda-b37a-0a76de6d8f44,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08\"" Sep 4 04:27:22.422593 systemd[1]: Started cri-containerd-64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3.scope - libcontainer container 64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3. 
Sep 4 04:27:22.444942 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 04:27:22.489247 containerd[1612]: time="2025-09-04T04:27:22.489194734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7676c47945-x89mv,Uid:9a23b40c-a5ee-4ca1-98cf-6891fa20aa3f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3\"" Sep 4 04:27:22.534611 systemd-networkd[1492]: vxlan.calico: Gained IPv6LL Sep 4 04:27:23.238614 systemd-networkd[1492]: calid8642fd6dc1: Gained IPv6LL Sep 4 04:27:23.366604 systemd-networkd[1492]: cali850c4a4d378: Gained IPv6LL Sep 4 04:27:23.573520 containerd[1612]: time="2025-09-04T04:27:23.573242985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:23.591316 containerd[1612]: time="2025-09-04T04:27:23.591182382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 4 04:27:23.593263 containerd[1612]: time="2025-09-04T04:27:23.593185055Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:23.596428 containerd[1612]: time="2025-09-04T04:27:23.596330960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:23.597361 containerd[1612]: time="2025-09-04T04:27:23.597273426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.919738134s" Sep 4 04:27:23.597434 containerd[1612]: time="2025-09-04T04:27:23.597357954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 4 04:27:23.598916 containerd[1612]: time="2025-09-04T04:27:23.598855470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 4 04:27:23.600363 containerd[1612]: time="2025-09-04T04:27:23.600324704Z" level=info msg="CreateContainer within sandbox \"cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 4 04:27:23.616734 containerd[1612]: time="2025-09-04T04:27:23.616629147Z" level=info msg="Container d7c930d718c5645335384ed7fed091efd3e002ce2172b055871ffa10a92d6b09: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:27:23.636078 containerd[1612]: time="2025-09-04T04:27:23.635991450Z" level=info msg="CreateContainer within sandbox \"cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d7c930d718c5645335384ed7fed091efd3e002ce2172b055871ffa10a92d6b09\"" Sep 4 04:27:23.636797 containerd[1612]: time="2025-09-04T04:27:23.636743499Z" level=info msg="StartContainer for \"d7c930d718c5645335384ed7fed091efd3e002ce2172b055871ffa10a92d6b09\"" Sep 4 04:27:23.638219 containerd[1612]: time="2025-09-04T04:27:23.638163239Z" level=info msg="connecting to shim d7c930d718c5645335384ed7fed091efd3e002ce2172b055871ffa10a92d6b09" address="unix:///run/containerd/s/03795ff49590eb777e169195a1567bb5b45be1780197a91c5afa71f8699c840f" protocol=ttrpc version=3 Sep 4 04:27:23.672582 systemd[1]: Started cri-containerd-d7c930d718c5645335384ed7fed091efd3e002ce2172b055871ffa10a92d6b09.scope - libcontainer container 
d7c930d718c5645335384ed7fed091efd3e002ce2172b055871ffa10a92d6b09. Sep 4 04:27:23.731909 containerd[1612]: time="2025-09-04T04:27:23.731843007Z" level=info msg="StartContainer for \"d7c930d718c5645335384ed7fed091efd3e002ce2172b055871ffa10a92d6b09\" returns successfully" Sep 4 04:27:24.070657 systemd-networkd[1492]: cali9d993d2cf1e: Gained IPv6LL Sep 4 04:27:25.915730 containerd[1612]: time="2025-09-04T04:27:25.915634840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:25.917610 containerd[1612]: time="2025-09-04T04:27:25.917464779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 4 04:27:25.919807 containerd[1612]: time="2025-09-04T04:27:25.919699247Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:25.922306 containerd[1612]: time="2025-09-04T04:27:25.922247773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:25.923097 containerd[1612]: time="2025-09-04T04:27:25.923029909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.324122862s" Sep 4 04:27:25.923097 containerd[1612]: time="2025-09-04T04:27:25.923081506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 4 04:27:25.924720 
containerd[1612]: time="2025-09-04T04:27:25.924210781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 04:27:25.926110 containerd[1612]: time="2025-09-04T04:27:25.926038527Z" level=info msg="CreateContainer within sandbox \"9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 04:27:25.941332 containerd[1612]: time="2025-09-04T04:27:25.941250534Z" level=info msg="Container 66e8ded539aa2b7cab5e627ee1df41db9a706ee8f7dd5bdf227af97aa24affc7: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:27:25.973739 containerd[1612]: time="2025-09-04T04:27:25.973681628Z" level=info msg="CreateContainer within sandbox \"9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"66e8ded539aa2b7cab5e627ee1df41db9a706ee8f7dd5bdf227af97aa24affc7\"" Sep 4 04:27:25.974596 containerd[1612]: time="2025-09-04T04:27:25.974306108Z" level=info msg="StartContainer for \"66e8ded539aa2b7cab5e627ee1df41db9a706ee8f7dd5bdf227af97aa24affc7\"" Sep 4 04:27:25.976210 containerd[1612]: time="2025-09-04T04:27:25.976156836Z" level=info msg="connecting to shim 66e8ded539aa2b7cab5e627ee1df41db9a706ee8f7dd5bdf227af97aa24affc7" address="unix:///run/containerd/s/667691ecf5ef60fc15430275040d91886742e3f53a9c79d174605496358aa96e" protocol=ttrpc version=3 Sep 4 04:27:26.001598 systemd[1]: Started cri-containerd-66e8ded539aa2b7cab5e627ee1df41db9a706ee8f7dd5bdf227af97aa24affc7.scope - libcontainer container 66e8ded539aa2b7cab5e627ee1df41db9a706ee8f7dd5bdf227af97aa24affc7. 
Sep 4 04:27:26.056721 containerd[1612]: time="2025-09-04T04:27:26.056671379Z" level=info msg="StartContainer for \"66e8ded539aa2b7cab5e627ee1df41db9a706ee8f7dd5bdf227af97aa24affc7\" returns successfully" Sep 4 04:27:29.483149 containerd[1612]: time="2025-09-04T04:27:29.483030878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:29.484379 containerd[1612]: time="2025-09-04T04:27:29.484329672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 4 04:27:29.486499 containerd[1612]: time="2025-09-04T04:27:29.486404771Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:29.489080 containerd[1612]: time="2025-09-04T04:27:29.489022085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:29.490008 containerd[1612]: time="2025-09-04T04:27:29.489942500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.565660976s" Sep 4 04:27:29.490070 containerd[1612]: time="2025-09-04T04:27:29.490000799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 4 04:27:29.491457 containerd[1612]: time="2025-09-04T04:27:29.491413577Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 4 04:27:29.492759 containerd[1612]: time="2025-09-04T04:27:29.492700409Z" level=info msg="CreateContainer within sandbox \"64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 04:27:29.512538 containerd[1612]: time="2025-09-04T04:27:29.512403002Z" level=info msg="Container 384eb97df0b4953d557f639e0a1e8aa3b95391df218786b4ed2a619035c6e5b7: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:27:29.828484 containerd[1612]: time="2025-09-04T04:27:29.828326509Z" level=info msg="CreateContainer within sandbox \"64a5d8dd2f5f7ac53ed3b34a23dae5c8bc94db51caf191943749032218084ff3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"384eb97df0b4953d557f639e0a1e8aa3b95391df218786b4ed2a619035c6e5b7\"" Sep 4 04:27:29.829200 containerd[1612]: time="2025-09-04T04:27:29.829085722Z" level=info msg="StartContainer for \"384eb97df0b4953d557f639e0a1e8aa3b95391df218786b4ed2a619035c6e5b7\"" Sep 4 04:27:29.830304 containerd[1612]: time="2025-09-04T04:27:29.830249733Z" level=info msg="connecting to shim 384eb97df0b4953d557f639e0a1e8aa3b95391df218786b4ed2a619035c6e5b7" address="unix:///run/containerd/s/1943f98cf56d53de2129131546a19fc9538b06c14370e72d1db7c76842c0dea5" protocol=ttrpc version=3 Sep 4 04:27:29.859563 systemd[1]: Started cri-containerd-384eb97df0b4953d557f639e0a1e8aa3b95391df218786b4ed2a619035c6e5b7.scope - libcontainer container 384eb97df0b4953d557f639e0a1e8aa3b95391df218786b4ed2a619035c6e5b7. 
Sep 4 04:27:30.217775 containerd[1612]: time="2025-09-04T04:27:30.217724766Z" level=info msg="StartContainer for \"384eb97df0b4953d557f639e0a1e8aa3b95391df218786b4ed2a619035c6e5b7\" returns successfully" Sep 4 04:27:30.844263 kubelet[2728]: I0904 04:27:30.844187 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7676c47945-x89mv" podStartSLOduration=35.844219718 podStartE2EDuration="42.844163247s" podCreationTimestamp="2025-09-04 04:26:48 +0000 UTC" firstStartedPulling="2025-09-04 04:27:22.491086489 +0000 UTC m=+50.755996133" lastFinishedPulling="2025-09-04 04:27:29.491030018 +0000 UTC m=+57.755939662" observedRunningTime="2025-09-04 04:27:30.843872432 +0000 UTC m=+59.108782076" watchObservedRunningTime="2025-09-04 04:27:30.844163247 +0000 UTC m=+59.109072891" Sep 4 04:27:30.941125 systemd[1]: Started sshd@7-10.0.0.112:22-10.0.0.1:49562.service - OpenSSH per-connection server daemon (10.0.0.1:49562). Sep 4 04:27:30.973259 containerd[1612]: time="2025-09-04T04:27:30.973175515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7676c47945-gdttk,Uid:41c81eb9-036f-44f8-aa46-f56bea288457,Namespace:calico-apiserver,Attempt:0,}" Sep 4 04:27:31.069705 sshd[4739]: Accepted publickey for core from 10.0.0.1 port 49562 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc Sep 4 04:27:31.074619 sshd-session[4739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:27:31.082583 systemd-logind[1509]: New session 8 of user core. Sep 4 04:27:31.090609 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 4 04:27:31.132620 systemd-networkd[1492]: cali06381914f55: Link UP Sep 4 04:27:31.134649 systemd-networkd[1492]: cali06381914f55: Gained carrier Sep 4 04:27:31.159607 containerd[1612]: 2025-09-04 04:27:31.028 [INFO][4741] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7676c47945--gdttk-eth0 calico-apiserver-7676c47945- calico-apiserver 41c81eb9-036f-44f8-aa46-f56bea288457 881 0 2025-09-04 04:26:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7676c47945 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7676c47945-gdttk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali06381914f55 [] [] }} ContainerID="ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-gdttk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--gdttk-" Sep 4 04:27:31.159607 containerd[1612]: 2025-09-04 04:27:31.028 [INFO][4741] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-gdttk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--gdttk-eth0" Sep 4 04:27:31.159607 containerd[1612]: 2025-09-04 04:27:31.072 [INFO][4757] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" HandleID="k8s-pod-network.ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" Workload="localhost-k8s-calico--apiserver--7676c47945--gdttk-eth0" Sep 4 04:27:31.159968 containerd[1612]: 2025-09-04 04:27:31.072 [INFO][4757] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" HandleID="k8s-pod-network.ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" Workload="localhost-k8s-calico--apiserver--7676c47945--gdttk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001314f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7676c47945-gdttk", "timestamp":"2025-09-04 04:27:31.072001123 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 04:27:31.159968 containerd[1612]: 2025-09-04 04:27:31.072 [INFO][4757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:27:31.159968 containerd[1612]: 2025-09-04 04:27:31.072 [INFO][4757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 04:27:31.159968 containerd[1612]: 2025-09-04 04:27:31.072 [INFO][4757] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:27:31.159968 containerd[1612]: 2025-09-04 04:27:31.083 [INFO][4757] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" host="localhost" Sep 4 04:27:31.159968 containerd[1612]: 2025-09-04 04:27:31.088 [INFO][4757] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:27:31.159968 containerd[1612]: 2025-09-04 04:27:31.098 [INFO][4757] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:27:31.159968 containerd[1612]: 2025-09-04 04:27:31.101 [INFO][4757] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:27:31.159968 containerd[1612]: 2025-09-04 04:27:31.105 [INFO][4757] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Sep 4 04:27:31.159968 containerd[1612]: 2025-09-04 04:27:31.105 [INFO][4757] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" host="localhost" Sep 4 04:27:31.160303 containerd[1612]: 2025-09-04 04:27:31.109 [INFO][4757] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348 Sep 4 04:27:31.160303 containerd[1612]: 2025-09-04 04:27:31.116 [INFO][4757] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" host="localhost" Sep 4 04:27:31.160303 containerd[1612]: 2025-09-04 04:27:31.123 [INFO][4757] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" host="localhost" Sep 4 04:27:31.160303 containerd[1612]: 2025-09-04 04:27:31.123 [INFO][4757] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" host="localhost" Sep 4 04:27:31.160303 containerd[1612]: 2025-09-04 04:27:31.124 [INFO][4757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 04:27:31.160303 containerd[1612]: 2025-09-04 04:27:31.124 [INFO][4757] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" HandleID="k8s-pod-network.ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" Workload="localhost-k8s-calico--apiserver--7676c47945--gdttk-eth0" Sep 4 04:27:31.160504 containerd[1612]: 2025-09-04 04:27:31.129 [INFO][4741] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-gdttk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--gdttk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7676c47945--gdttk-eth0", GenerateName:"calico-apiserver-7676c47945-", Namespace:"calico-apiserver", SelfLink:"", UID:"41c81eb9-036f-44f8-aa46-f56bea288457", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 26, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7676c47945", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7676c47945-gdttk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06381914f55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:27:31.160585 containerd[1612]: 2025-09-04 04:27:31.129 [INFO][4741] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-gdttk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--gdttk-eth0" Sep 4 04:27:31.160585 containerd[1612]: 2025-09-04 04:27:31.129 [INFO][4741] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06381914f55 ContainerID="ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-gdttk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--gdttk-eth0" Sep 4 04:27:31.160585 containerd[1612]: 2025-09-04 04:27:31.135 [INFO][4741] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-gdttk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--gdttk-eth0" Sep 4 04:27:31.160675 containerd[1612]: 2025-09-04 04:27:31.136 [INFO][4741] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-gdttk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--gdttk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7676c47945--gdttk-eth0", 
GenerateName:"calico-apiserver-7676c47945-", Namespace:"calico-apiserver", SelfLink:"", UID:"41c81eb9-036f-44f8-aa46-f56bea288457", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 26, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7676c47945", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348", Pod:"calico-apiserver-7676c47945-gdttk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06381914f55", MAC:"ae:d6:26:1f:cf:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:27:31.160755 containerd[1612]: 2025-09-04 04:27:31.152 [INFO][4741] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" Namespace="calico-apiserver" Pod="calico-apiserver-7676c47945-gdttk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7676c47945--gdttk-eth0" Sep 4 04:27:31.204262 containerd[1612]: time="2025-09-04T04:27:31.204197912Z" level=info msg="connecting to shim ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348" 
address="unix:///run/containerd/s/6a158ca65a780ccf02b07389417530f4a2a670f3d7e7d8a3156cdab4a6c25d5d" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:27:31.245551 systemd[1]: Started cri-containerd-ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348.scope - libcontainer container ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348. Sep 4 04:27:31.267208 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 04:27:31.389057 containerd[1612]: time="2025-09-04T04:27:31.388904770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7676c47945-gdttk,Uid:41c81eb9-036f-44f8-aa46-f56bea288457,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348\"" Sep 4 04:27:31.392828 containerd[1612]: time="2025-09-04T04:27:31.392766246Z" level=info msg="CreateContainer within sandbox \"ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 04:27:31.397081 sshd[4765]: Connection closed by 10.0.0.1 port 49562 Sep 4 04:27:31.398058 sshd-session[4739]: pam_unix(sshd:session): session closed for user core Sep 4 04:27:31.405604 systemd[1]: sshd@7-10.0.0.112:22-10.0.0.1:49562.service: Deactivated successfully. Sep 4 04:27:31.407328 containerd[1612]: time="2025-09-04T04:27:31.407174439Z" level=info msg="Container e35faafeda9f563f750b4d8eb35a94eead9226bb047c61bc051dd7afff3867cc: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:27:31.412891 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 04:27:31.414342 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit. Sep 4 04:27:31.415932 systemd-logind[1509]: Removed session 8. 
Sep 4 04:27:31.418803 containerd[1612]: time="2025-09-04T04:27:31.418747398Z" level=info msg="CreateContainer within sandbox \"ea0834d503f593254c1bee39d7bdda3cfa63e2eced8b47f501cd5abcc6587348\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e35faafeda9f563f750b4d8eb35a94eead9226bb047c61bc051dd7afff3867cc\"" Sep 4 04:27:31.419347 containerd[1612]: time="2025-09-04T04:27:31.419301506Z" level=info msg="StartContainer for \"e35faafeda9f563f750b4d8eb35a94eead9226bb047c61bc051dd7afff3867cc\"" Sep 4 04:27:31.420514 containerd[1612]: time="2025-09-04T04:27:31.420462392Z" level=info msg="connecting to shim e35faafeda9f563f750b4d8eb35a94eead9226bb047c61bc051dd7afff3867cc" address="unix:///run/containerd/s/6a158ca65a780ccf02b07389417530f4a2a670f3d7e7d8a3156cdab4a6c25d5d" protocol=ttrpc version=3 Sep 4 04:27:31.452723 systemd[1]: Started cri-containerd-e35faafeda9f563f750b4d8eb35a94eead9226bb047c61bc051dd7afff3867cc.scope - libcontainer container e35faafeda9f563f750b4d8eb35a94eead9226bb047c61bc051dd7afff3867cc. 
Sep 4 04:27:31.521410 containerd[1612]: time="2025-09-04T04:27:31.521337922Z" level=info msg="StartContainer for \"e35faafeda9f563f750b4d8eb35a94eead9226bb047c61bc051dd7afff3867cc\" returns successfully" Sep 4 04:27:31.856167 kubelet[2728]: I0904 04:27:31.856041 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7676c47945-gdttk" podStartSLOduration=43.856014768 podStartE2EDuration="43.856014768s" podCreationTimestamp="2025-09-04 04:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:27:31.838768516 +0000 UTC m=+60.103678160" watchObservedRunningTime="2025-09-04 04:27:31.856014768 +0000 UTC m=+60.120924422" Sep 4 04:27:31.973932 kubelet[2728]: E0904 04:27:31.973877 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:27:31.974429 containerd[1612]: time="2025-09-04T04:27:31.974388230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ncsgv,Uid:fc310726-94cb-4b24-8d5b-a83ff5c37bf9,Namespace:kube-system,Attempt:0,}" Sep 4 04:27:31.975199 containerd[1612]: time="2025-09-04T04:27:31.975065409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-kfp9d,Uid:43cb4990-cf2d-4baf-b7d5-c875eeaa23e6,Namespace:calico-system,Attempt:0,}" Sep 4 04:27:31.977451 containerd[1612]: time="2025-09-04T04:27:31.977401587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854984d5c7-kpvmr,Uid:70ee29dd-1950-4e60-940b-ea22b976f88f,Namespace:calico-system,Attempt:0,}" Sep 4 04:27:32.516163 systemd-networkd[1492]: cali90b99a039a9: Link UP Sep 4 04:27:32.517428 systemd-networkd[1492]: cali90b99a039a9: Gained carrier Sep 4 04:27:32.835478 containerd[1612]: 2025-09-04 04:27:32.057 [INFO][4886] cni-plugin/plugin.go 
340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--kfp9d-eth0 goldmane-7988f88666- calico-system 43cb4990-cf2d-4baf-b7d5-c875eeaa23e6 873 0 2025-09-04 04:26:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-kfp9d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali90b99a039a9 [] [] }} ContainerID="ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" Namespace="calico-system" Pod="goldmane-7988f88666-kfp9d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--kfp9d-" Sep 4 04:27:32.835478 containerd[1612]: 2025-09-04 04:27:32.057 [INFO][4886] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" Namespace="calico-system" Pod="goldmane-7988f88666-kfp9d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--kfp9d-eth0" Sep 4 04:27:32.835478 containerd[1612]: 2025-09-04 04:27:32.108 [INFO][4925] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" HandleID="k8s-pod-network.ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" Workload="localhost-k8s-goldmane--7988f88666--kfp9d-eth0" Sep 4 04:27:32.835824 containerd[1612]: 2025-09-04 04:27:32.110 [INFO][4925] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" HandleID="k8s-pod-network.ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" Workload="localhost-k8s-goldmane--7988f88666--kfp9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"goldmane-7988f88666-kfp9d", "timestamp":"2025-09-04 04:27:32.108338789 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 04:27:32.835824 containerd[1612]: 2025-09-04 04:27:32.110 [INFO][4925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:27:32.835824 containerd[1612]: 2025-09-04 04:27:32.110 [INFO][4925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 04:27:32.835824 containerd[1612]: 2025-09-04 04:27:32.110 [INFO][4925] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:27:32.835824 containerd[1612]: 2025-09-04 04:27:32.191 [INFO][4925] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" host="localhost" Sep 4 04:27:32.835824 containerd[1612]: 2025-09-04 04:27:32.201 [INFO][4925] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:27:32.835824 containerd[1612]: 2025-09-04 04:27:32.212 [INFO][4925] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:27:32.835824 containerd[1612]: 2025-09-04 04:27:32.214 [INFO][4925] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:27:32.835824 containerd[1612]: 2025-09-04 04:27:32.217 [INFO][4925] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 04:27:32.835824 containerd[1612]: 2025-09-04 04:27:32.217 [INFO][4925] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" host="localhost" Sep 4 04:27:32.836082 containerd[1612]: 2025-09-04 04:27:32.219 [INFO][4925] 
ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae Sep 4 04:27:32.836082 containerd[1612]: 2025-09-04 04:27:32.251 [INFO][4925] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" host="localhost" Sep 4 04:27:32.836082 containerd[1612]: 2025-09-04 04:27:32.510 [INFO][4925] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" host="localhost" Sep 4 04:27:32.836082 containerd[1612]: 2025-09-04 04:27:32.510 [INFO][4925] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" host="localhost" Sep 4 04:27:32.836082 containerd[1612]: 2025-09-04 04:27:32.510 [INFO][4925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 04:27:32.836082 containerd[1612]: 2025-09-04 04:27:32.510 [INFO][4925] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" HandleID="k8s-pod-network.ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" Workload="localhost-k8s-goldmane--7988f88666--kfp9d-eth0" Sep 4 04:27:32.836214 containerd[1612]: 2025-09-04 04:27:32.513 [INFO][4886] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" Namespace="calico-system" Pod="goldmane-7988f88666-kfp9d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--kfp9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--kfp9d-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"43cb4990-cf2d-4baf-b7d5-c875eeaa23e6", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-kfp9d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali90b99a039a9", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:27:32.836214 containerd[1612]: 2025-09-04 04:27:32.513 [INFO][4886] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" Namespace="calico-system" Pod="goldmane-7988f88666-kfp9d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--kfp9d-eth0" Sep 4 04:27:32.836309 containerd[1612]: 2025-09-04 04:27:32.513 [INFO][4886] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90b99a039a9 ContainerID="ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" Namespace="calico-system" Pod="goldmane-7988f88666-kfp9d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--kfp9d-eth0" Sep 4 04:27:32.836309 containerd[1612]: 2025-09-04 04:27:32.516 [INFO][4886] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" Namespace="calico-system" Pod="goldmane-7988f88666-kfp9d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--kfp9d-eth0" Sep 4 04:27:32.836363 containerd[1612]: 2025-09-04 04:27:32.517 [INFO][4886] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" Namespace="calico-system" Pod="goldmane-7988f88666-kfp9d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--kfp9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--kfp9d-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"43cb4990-cf2d-4baf-b7d5-c875eeaa23e6", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 26, 51, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae", Pod:"goldmane-7988f88666-kfp9d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali90b99a039a9", MAC:"6e:e3:09:1f:fc:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:27:32.836428 containerd[1612]: 2025-09-04 04:27:32.832 [INFO][4886] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" Namespace="calico-system" Pod="goldmane-7988f88666-kfp9d" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--kfp9d-eth0" Sep 4 04:27:32.841885 kubelet[2728]: I0904 04:27:32.841835 2728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 04:27:32.996025 systemd-networkd[1492]: cali87e0f46da53: Link UP Sep 4 04:27:32.996800 systemd-networkd[1492]: cali87e0f46da53: Gained carrier Sep 4 04:27:33.077318 containerd[1612]: 2025-09-04 04:27:32.054 [INFO][4879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--ncsgv-eth0 coredns-7c65d6cfc9- kube-system fc310726-94cb-4b24-8d5b-a83ff5c37bf9 870 0 2025-09-04 04:26:38 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-ncsgv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali87e0f46da53 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncsgv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ncsgv-" Sep 4 04:27:33.077318 containerd[1612]: 2025-09-04 04:27:32.054 [INFO][4879] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncsgv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ncsgv-eth0" Sep 4 04:27:33.077318 containerd[1612]: 2025-09-04 04:27:32.112 [INFO][4926] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" HandleID="k8s-pod-network.4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" Workload="localhost-k8s-coredns--7c65d6cfc9--ncsgv-eth0" Sep 4 04:27:33.077906 containerd[1612]: 2025-09-04 04:27:32.112 [INFO][4926] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" HandleID="k8s-pod-network.4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" Workload="localhost-k8s-coredns--7c65d6cfc9--ncsgv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a32b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-ncsgv", "timestamp":"2025-09-04 04:27:32.112717515 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 04:27:33.077906 containerd[1612]: 2025-09-04 04:27:32.112 [INFO][4926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:27:33.077906 containerd[1612]: 2025-09-04 04:27:32.510 [INFO][4926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 04:27:33.077906 containerd[1612]: 2025-09-04 04:27:32.510 [INFO][4926] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:27:33.077906 containerd[1612]: 2025-09-04 04:27:32.709 [INFO][4926] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" host="localhost" Sep 4 04:27:33.077906 containerd[1612]: 2025-09-04 04:27:32.900 [INFO][4926] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:27:33.077906 containerd[1612]: 2025-09-04 04:27:32.944 [INFO][4926] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:27:33.077906 containerd[1612]: 2025-09-04 04:27:32.947 [INFO][4926] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:27:33.077906 containerd[1612]: 2025-09-04 04:27:32.958 [INFO][4926] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 04:27:33.077906 containerd[1612]: 2025-09-04 04:27:32.958 [INFO][4926] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" host="localhost" Sep 4 04:27:33.078237 containerd[1612]: 2025-09-04 04:27:32.960 [INFO][4926] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014 Sep 4 04:27:33.078237 containerd[1612]: 2025-09-04 04:27:32.978 [INFO][4926] ipam/ipam.go 
1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" host="localhost" Sep 4 04:27:33.078237 containerd[1612]: 2025-09-04 04:27:32.988 [INFO][4926] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" host="localhost" Sep 4 04:27:33.078237 containerd[1612]: 2025-09-04 04:27:32.988 [INFO][4926] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" host="localhost" Sep 4 04:27:33.078237 containerd[1612]: 2025-09-04 04:27:32.988 [INFO][4926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 04:27:33.078237 containerd[1612]: 2025-09-04 04:27:32.988 [INFO][4926] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" HandleID="k8s-pod-network.4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" Workload="localhost-k8s-coredns--7c65d6cfc9--ncsgv-eth0" Sep 4 04:27:33.079651 containerd[1612]: 2025-09-04 04:27:32.992 [INFO][4879] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncsgv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ncsgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--ncsgv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fc310726-94cb-4b24-8d5b-a83ff5c37bf9", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 26, 38, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-ncsgv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali87e0f46da53", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:27:33.079748 containerd[1612]: 2025-09-04 04:27:32.992 [INFO][4879] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncsgv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ncsgv-eth0" Sep 4 04:27:33.079748 containerd[1612]: 2025-09-04 04:27:32.992 [INFO][4879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87e0f46da53 ContainerID="4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncsgv" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ncsgv-eth0" Sep 4 04:27:33.079748 containerd[1612]: 2025-09-04 04:27:32.997 [INFO][4879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncsgv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ncsgv-eth0" Sep 4 04:27:33.079831 containerd[1612]: 2025-09-04 04:27:32.997 [INFO][4879] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncsgv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ncsgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--ncsgv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fc310726-94cb-4b24-8d5b-a83ff5c37bf9", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014", Pod:"coredns-7c65d6cfc9-ncsgv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali87e0f46da53", MAC:"fa:2a:d2:35:18:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:27:33.079831 containerd[1612]: 2025-09-04 04:27:33.071 [INFO][4879] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncsgv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ncsgv-eth0" Sep 4 04:27:33.132642 containerd[1612]: time="2025-09-04T04:27:33.132140480Z" level=info msg="connecting to shim 4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014" address="unix:///run/containerd/s/3778d9b56bb0f7043489eaf9c6ca188bf677ceeb9d95e19ec3942291aed21e81" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:27:33.133371 containerd[1612]: time="2025-09-04T04:27:33.133338215Z" level=info msg="connecting to shim ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae" address="unix:///run/containerd/s/7fcde0fc959674c0939cb916beea55ae6e502aa4ba27ed323d232b5a636050fb" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:27:33.155828 systemd-networkd[1492]: calice125ab601a: Link UP Sep 4 04:27:33.158021 systemd-networkd[1492]: calice125ab601a: Gained carrier Sep 4 04:27:33.159049 systemd-networkd[1492]: cali06381914f55: Gained IPv6LL Sep 4 04:27:33.188747 systemd[1]: Started cri-containerd-ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae.scope - libcontainer container 
ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae. Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:32.196 [INFO][4916] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--854984d5c7--kpvmr-eth0 calico-kube-controllers-854984d5c7- calico-system 70ee29dd-1950-4e60-940b-ea22b976f88f 880 0 2025-09-04 04:26:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:854984d5c7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-854984d5c7-kpvmr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calice125ab601a [] [] }} ContainerID="36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" Namespace="calico-system" Pod="calico-kube-controllers-854984d5c7-kpvmr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854984d5c7--kpvmr-" Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:32.196 [INFO][4916] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" Namespace="calico-system" Pod="calico-kube-controllers-854984d5c7-kpvmr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854984d5c7--kpvmr-eth0" Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:32.237 [INFO][4943] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" HandleID="k8s-pod-network.36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" Workload="localhost-k8s-calico--kube--controllers--854984d5c7--kpvmr-eth0" Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:32.238 [INFO][4943] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" HandleID="k8s-pod-network.36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" Workload="localhost-k8s-calico--kube--controllers--854984d5c7--kpvmr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c2140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-854984d5c7-kpvmr", "timestamp":"2025-09-04 04:27:32.237691931 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:32.238 [INFO][4943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:32.988 [INFO][4943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:32.988 [INFO][4943] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:33.070 [INFO][4943] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" host="localhost" Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:33.079 [INFO][4943] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:33.089 [INFO][4943] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:33.095 [INFO][4943] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:33.101 [INFO][4943] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:33.101 [INFO][4943] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" host="localhost" Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:33.107 [INFO][4943] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3 Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:33.113 [INFO][4943] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" host="localhost" Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:33.127 [INFO][4943] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" host="localhost" Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:33.129 [INFO][4943] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" host="localhost" Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:33.129 [INFO][4943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 04:27:33.208459 containerd[1612]: 2025-09-04 04:27:33.129 [INFO][4943] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" HandleID="k8s-pod-network.36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" Workload="localhost-k8s-calico--kube--controllers--854984d5c7--kpvmr-eth0" Sep 4 04:27:33.209099 containerd[1612]: 2025-09-04 04:27:33.148 [INFO][4916] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" Namespace="calico-system" Pod="calico-kube-controllers-854984d5c7-kpvmr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854984d5c7--kpvmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--854984d5c7--kpvmr-eth0", GenerateName:"calico-kube-controllers-854984d5c7-", Namespace:"calico-system", SelfLink:"", UID:"70ee29dd-1950-4e60-940b-ea22b976f88f", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854984d5c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-854984d5c7-kpvmr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice125ab601a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:27:33.209099 containerd[1612]: 2025-09-04 04:27:33.148 [INFO][4916] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" Namespace="calico-system" Pod="calico-kube-controllers-854984d5c7-kpvmr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854984d5c7--kpvmr-eth0" Sep 4 04:27:33.209099 containerd[1612]: 2025-09-04 04:27:33.148 [INFO][4916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice125ab601a ContainerID="36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" Namespace="calico-system" Pod="calico-kube-controllers-854984d5c7-kpvmr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854984d5c7--kpvmr-eth0" Sep 4 04:27:33.209099 containerd[1612]: 2025-09-04 04:27:33.168 [INFO][4916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" Namespace="calico-system" Pod="calico-kube-controllers-854984d5c7-kpvmr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854984d5c7--kpvmr-eth0" Sep 4 04:27:33.209099 containerd[1612]: 2025-09-04 04:27:33.169 [INFO][4916] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" Namespace="calico-system" Pod="calico-kube-controllers-854984d5c7-kpvmr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854984d5c7--kpvmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--854984d5c7--kpvmr-eth0", GenerateName:"calico-kube-controllers-854984d5c7-", Namespace:"calico-system", SelfLink:"", UID:"70ee29dd-1950-4e60-940b-ea22b976f88f", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854984d5c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3", Pod:"calico-kube-controllers-854984d5c7-kpvmr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice125ab601a", MAC:"da:fb:02:21:1f:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:27:33.209099 containerd[1612]: 2025-09-04 04:27:33.192 [INFO][4916] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" Namespace="calico-system" Pod="calico-kube-controllers-854984d5c7-kpvmr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854984d5c7--kpvmr-eth0" Sep 4 04:27:33.216807 systemd[1]: Started cri-containerd-4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014.scope - 
libcontainer container 4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014. Sep 4 04:27:33.242100 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 04:27:33.250130 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 04:27:33.265269 containerd[1612]: time="2025-09-04T04:27:33.265188380Z" level=info msg="connecting to shim 36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3" address="unix:///run/containerd/s/7999a4cbb81399334b77b0a3d0449fac329e58fab742d43fe352bcfa7df3b31e" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:27:33.311398 containerd[1612]: time="2025-09-04T04:27:33.311255657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ncsgv,Uid:fc310726-94cb-4b24-8d5b-a83ff5c37bf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014\"" Sep 4 04:27:33.314324 kubelet[2728]: E0904 04:27:33.314266 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:27:33.323570 systemd[1]: Started cri-containerd-36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3.scope - libcontainer container 36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3. 
Sep 4 04:27:33.334827 containerd[1612]: time="2025-09-04T04:27:33.333885968Z" level=info msg="CreateContainer within sandbox \"4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 04:27:33.348915 containerd[1612]: time="2025-09-04T04:27:33.348835155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-kfp9d,Uid:43cb4990-cf2d-4baf-b7d5-c875eeaa23e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae\"" Sep 4 04:27:33.375051 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 04:27:33.532461 containerd[1612]: time="2025-09-04T04:27:33.532396454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854984d5c7-kpvmr,Uid:70ee29dd-1950-4e60-940b-ea22b976f88f,Namespace:calico-system,Attempt:0,} returns sandbox id \"36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3\"" Sep 4 04:27:33.972814 kubelet[2728]: E0904 04:27:33.972752 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:27:33.973323 containerd[1612]: time="2025-09-04T04:27:33.973268166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hlfrf,Uid:ceca6a0d-4411-4bf5-9886-bc8bec807f34,Namespace:kube-system,Attempt:0,}" Sep 4 04:27:34.066715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3153574126.mount: Deactivated successfully. Sep 4 04:27:34.154837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2391366592.mount: Deactivated successfully. 
Sep 4 04:27:34.157740 containerd[1612]: time="2025-09-04T04:27:34.157691191Z" level=info msg="Container d44bd8271a6c12872bf5ed202fc7e62dc65bab4106591c003119a74f4429e20a: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:27:34.162432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1147989916.mount: Deactivated successfully. Sep 4 04:27:34.254504 containerd[1612]: time="2025-09-04T04:27:34.254267351Z" level=info msg="CreateContainer within sandbox \"4f2537aeb57575fed2133929186905597e073095a0239a2f8dc79f2d5cfb2014\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d44bd8271a6c12872bf5ed202fc7e62dc65bab4106591c003119a74f4429e20a\"" Sep 4 04:27:34.255248 containerd[1612]: time="2025-09-04T04:27:34.255219816Z" level=info msg="StartContainer for \"d44bd8271a6c12872bf5ed202fc7e62dc65bab4106591c003119a74f4429e20a\"" Sep 4 04:27:34.256473 containerd[1612]: time="2025-09-04T04:27:34.256430885Z" level=info msg="connecting to shim d44bd8271a6c12872bf5ed202fc7e62dc65bab4106591c003119a74f4429e20a" address="unix:///run/containerd/s/3778d9b56bb0f7043489eaf9c6ca188bf677ceeb9d95e19ec3942291aed21e81" protocol=ttrpc version=3 Sep 4 04:27:34.275324 containerd[1612]: time="2025-09-04T04:27:34.273723594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:34.275511 containerd[1612]: time="2025-09-04T04:27:34.275368197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 4 04:27:34.276541 containerd[1612]: time="2025-09-04T04:27:34.276509165Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:34.281205 containerd[1612]: time="2025-09-04T04:27:34.281174799Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:27:34.282698 containerd[1612]: time="2025-09-04T04:27:34.282658350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.791210599s" Sep 4 04:27:34.282698 containerd[1612]: time="2025-09-04T04:27:34.282688466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 4 04:27:34.284711 containerd[1612]: time="2025-09-04T04:27:34.284690729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 4 04:27:34.285616 containerd[1612]: time="2025-09-04T04:27:34.285551271Z" level=info msg="CreateContainer within sandbox \"cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 4 04:27:34.307430 containerd[1612]: time="2025-09-04T04:27:34.307370955Z" level=info msg="Container 1e3adb02ebe8ffa0c9e09bcef349fe2477ea8d0f250d65ead06c3ca2a1ca5f22: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:27:34.311758 systemd[1]: Started cri-containerd-d44bd8271a6c12872bf5ed202fc7e62dc65bab4106591c003119a74f4429e20a.scope - libcontainer container d44bd8271a6c12872bf5ed202fc7e62dc65bab4106591c003119a74f4429e20a. 
Sep 4 04:27:34.326319 containerd[1612]: time="2025-09-04T04:27:34.322681920Z" level=info msg="CreateContainer within sandbox \"cf3861888714c17dc594077089932c3b48ddb031ffa6762731a289135539ff50\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1e3adb02ebe8ffa0c9e09bcef349fe2477ea8d0f250d65ead06c3ca2a1ca5f22\"" Sep 4 04:27:34.326319 containerd[1612]: time="2025-09-04T04:27:34.324372388Z" level=info msg="StartContainer for \"1e3adb02ebe8ffa0c9e09bcef349fe2477ea8d0f250d65ead06c3ca2a1ca5f22\"" Sep 4 04:27:34.326555 containerd[1612]: time="2025-09-04T04:27:34.326454850Z" level=info msg="connecting to shim 1e3adb02ebe8ffa0c9e09bcef349fe2477ea8d0f250d65ead06c3ca2a1ca5f22" address="unix:///run/containerd/s/03795ff49590eb777e169195a1567bb5b45be1780197a91c5afa71f8699c840f" protocol=ttrpc version=3 Sep 4 04:27:34.370507 systemd[1]: Started cri-containerd-1e3adb02ebe8ffa0c9e09bcef349fe2477ea8d0f250d65ead06c3ca2a1ca5f22.scope - libcontainer container 1e3adb02ebe8ffa0c9e09bcef349fe2477ea8d0f250d65ead06c3ca2a1ca5f22. 
Sep 4 04:27:34.438571 systemd-networkd[1492]: cali90b99a039a9: Gained IPv6LL Sep 4 04:27:34.456971 systemd-networkd[1492]: calib8991e88606: Link UP Sep 4 04:27:34.458137 systemd-networkd[1492]: calib8991e88606: Gained carrier Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.302 [INFO][5123] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--hlfrf-eth0 coredns-7c65d6cfc9- kube-system ceca6a0d-4411-4bf5-9886-bc8bec807f34 876 0 2025-09-04 04:26:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-hlfrf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib8991e88606 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlfrf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlfrf-" Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.302 [INFO][5123] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlfrf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlfrf-eth0" Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.354 [INFO][5154] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" HandleID="k8s-pod-network.01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" Workload="localhost-k8s-coredns--7c65d6cfc9--hlfrf-eth0" Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.354 [INFO][5154] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" HandleID="k8s-pod-network.01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" Workload="localhost-k8s-coredns--7c65d6cfc9--hlfrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a3450), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-hlfrf", "timestamp":"2025-09-04 04:27:34.353653836 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.354 [INFO][5154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.354 [INFO][5154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.355 [INFO][5154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.369 [INFO][5154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" host="localhost" Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.380 [INFO][5154] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.389 [INFO][5154] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.392 [INFO][5154] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.396 [INFO][5154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" 
Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.396 [INFO][5154] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" host="localhost" Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.398 [INFO][5154] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1 Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.426 [INFO][5154] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" host="localhost" Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.446 [INFO][5154] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" host="localhost" Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.446 [INFO][5154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" host="localhost" Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.446 [INFO][5154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 04:27:34.505201 containerd[1612]: 2025-09-04 04:27:34.446 [INFO][5154] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" HandleID="k8s-pod-network.01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" Workload="localhost-k8s-coredns--7c65d6cfc9--hlfrf-eth0" Sep 4 04:27:34.505902 containerd[1612]: 2025-09-04 04:27:34.450 [INFO][5123] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlfrf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlfrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hlfrf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ceca6a0d-4411-4bf5-9886-bc8bec807f34", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-hlfrf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib8991e88606", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:27:34.505902 containerd[1612]: 2025-09-04 04:27:34.451 [INFO][5123] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlfrf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlfrf-eth0" Sep 4 04:27:34.505902 containerd[1612]: 2025-09-04 04:27:34.451 [INFO][5123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib8991e88606 ContainerID="01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlfrf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlfrf-eth0" Sep 4 04:27:34.505902 containerd[1612]: 2025-09-04 04:27:34.457 [INFO][5123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlfrf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlfrf-eth0" Sep 4 04:27:34.505902 containerd[1612]: 2025-09-04 04:27:34.458 [INFO][5123] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlfrf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlfrf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hlfrf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ceca6a0d-4411-4bf5-9886-bc8bec807f34", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 4, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1", Pod:"coredns-7c65d6cfc9-hlfrf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib8991e88606", MAC:"8a:f5:8d:f2:30:09", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 04:27:34.505902 containerd[1612]: 2025-09-04 04:27:34.501 [INFO][5123] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlfrf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hlfrf-eth0" Sep 4 04:27:34.631554 systemd-networkd[1492]: calice125ab601a: Gained IPv6LL Sep 4 04:27:34.736726 containerd[1612]: time="2025-09-04T04:27:34.736670217Z" level=info msg="StartContainer for \"d44bd8271a6c12872bf5ed202fc7e62dc65bab4106591c003119a74f4429e20a\" returns successfully" Sep 4 04:27:34.738077 containerd[1612]: time="2025-09-04T04:27:34.738043621Z" level=info msg="StartContainer for \"1e3adb02ebe8ffa0c9e09bcef349fe2477ea8d0f250d65ead06c3ca2a1ca5f22\" returns successfully" Sep 4 04:27:34.837251 containerd[1612]: time="2025-09-04T04:27:34.836551559Z" level=info msg="connecting to shim 01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1" address="unix:///run/containerd/s/a4f3d04c4c018c595334620626c94493d23c0a35f8bfb85928e1ccdab5fd3f30" namespace=k8s.io protocol=ttrpc version=3 Sep 4 04:27:34.868001 kubelet[2728]: E0904 04:27:34.867934 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:27:34.868627 systemd[1]: Started cri-containerd-01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1.scope - libcontainer container 01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1. 
Sep 4 04:27:34.888916 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 04:27:34.951630 systemd-networkd[1492]: cali87e0f46da53: Gained IPv6LL
Sep 4 04:27:35.007528 containerd[1612]: time="2025-09-04T04:27:35.007467472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hlfrf,Uid:ceca6a0d-4411-4bf5-9886-bc8bec807f34,Namespace:kube-system,Attempt:0,} returns sandbox id \"01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1\""
Sep 4 04:27:35.008371 kubelet[2728]: E0904 04:27:35.008322 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:35.010005 containerd[1612]: time="2025-09-04T04:27:35.009944665Z" level=info msg="CreateContainer within sandbox \"01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 04:27:35.098459 kubelet[2728]: I0904 04:27:35.097581 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6db7d976d9-x92pk" podStartSLOduration=2.490825862 podStartE2EDuration="15.097554711s" podCreationTimestamp="2025-09-04 04:27:20 +0000 UTC" firstStartedPulling="2025-09-04 04:27:21.677090978 +0000 UTC m=+49.942000622" lastFinishedPulling="2025-09-04 04:27:34.283819827 +0000 UTC m=+62.548729471" observedRunningTime="2025-09-04 04:27:34.870612354 +0000 UTC m=+63.135522009" watchObservedRunningTime="2025-09-04 04:27:35.097554711 +0000 UTC m=+63.362464355"
Sep 4 04:27:35.098897 kubelet[2728]: I0904 04:27:35.098828 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ncsgv" podStartSLOduration=57.09881892 podStartE2EDuration="57.09881892s" podCreationTimestamp="2025-09-04 04:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:27:35.098785037 +0000 UTC m=+63.363694681" watchObservedRunningTime="2025-09-04 04:27:35.09881892 +0000 UTC m=+63.363728554"
Sep 4 04:27:35.111089 containerd[1612]: time="2025-09-04T04:27:35.111023734Z" level=info msg="Container f1210e64e1d6e45c2c9c271f7033cc6da5c0d8a42d3a56e17f432be234a6ed7e: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:27:35.119467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1199861058.mount: Deactivated successfully.
Sep 4 04:27:35.123855 containerd[1612]: time="2025-09-04T04:27:35.123774431Z" level=info msg="CreateContainer within sandbox \"01b40d4f02cdb1cfda60e24fa425951c949b624e08aff51be713345052c669a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f1210e64e1d6e45c2c9c271f7033cc6da5c0d8a42d3a56e17f432be234a6ed7e\""
Sep 4 04:27:35.125446 containerd[1612]: time="2025-09-04T04:27:35.125196105Z" level=info msg="StartContainer for \"f1210e64e1d6e45c2c9c271f7033cc6da5c0d8a42d3a56e17f432be234a6ed7e\""
Sep 4 04:27:35.126599 containerd[1612]: time="2025-09-04T04:27:35.126567095Z" level=info msg="connecting to shim f1210e64e1d6e45c2c9c271f7033cc6da5c0d8a42d3a56e17f432be234a6ed7e" address="unix:///run/containerd/s/a4f3d04c4c018c595334620626c94493d23c0a35f8bfb85928e1ccdab5fd3f30" protocol=ttrpc version=3
Sep 4 04:27:35.157674 systemd[1]: Started cri-containerd-f1210e64e1d6e45c2c9c271f7033cc6da5c0d8a42d3a56e17f432be234a6ed7e.scope - libcontainer container f1210e64e1d6e45c2c9c271f7033cc6da5c0d8a42d3a56e17f432be234a6ed7e.
Sep 4 04:27:35.202547 containerd[1612]: time="2025-09-04T04:27:35.202391693Z" level=info msg="StartContainer for \"f1210e64e1d6e45c2c9c271f7033cc6da5c0d8a42d3a56e17f432be234a6ed7e\" returns successfully" Sep 4 04:27:35.654507 systemd-networkd[1492]: calib8991e88606: Gained IPv6LL Sep 4 04:27:35.876507 kubelet[2728]: E0904 04:27:35.876211 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:27:35.876507 kubelet[2728]: E0904 04:27:35.876344 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:27:35.889434 kubelet[2728]: I0904 04:27:35.889240 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hlfrf" podStartSLOduration=57.889221469 podStartE2EDuration="57.889221469s" podCreationTimestamp="2025-09-04 04:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:27:35.888904304 +0000 UTC m=+64.153813968" watchObservedRunningTime="2025-09-04 04:27:35.889221469 +0000 UTC m=+64.154131113" Sep 4 04:27:36.414971 systemd[1]: Started sshd@8-10.0.0.112:22-10.0.0.1:49568.service - OpenSSH per-connection server daemon (10.0.0.1:49568). 
Sep 4 04:27:36.461408 containerd[1612]: time="2025-09-04T04:27:36.461317092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:27:36.462318 containerd[1612]: time="2025-09-04T04:27:36.462235513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 4 04:27:36.463636 containerd[1612]: time="2025-09-04T04:27:36.463531261Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:27:36.465573 containerd[1612]: time="2025-09-04T04:27:36.465546428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:27:36.466313 containerd[1612]: time="2025-09-04T04:27:36.466057586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.181341299s"
Sep 4 04:27:36.466313 containerd[1612]: time="2025-09-04T04:27:36.466105506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 4 04:27:36.467848 containerd[1612]: time="2025-09-04T04:27:36.467791196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 4 04:27:36.469104 containerd[1612]: time="2025-09-04T04:27:36.469033504Z" level=info msg="CreateContainer within sandbox \"9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 4 04:27:36.490713 sshd[5321]: Accepted publickey for core from 10.0.0.1 port 49568 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc
Sep 4 04:27:36.492818 sshd-session[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:36.498765 systemd-logind[1509]: New session 9 of user core.
Sep 4 04:27:36.509562 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 04:27:36.518774 containerd[1612]: time="2025-09-04T04:27:36.518712531Z" level=info msg="Container a5827324b28eb1c90406cb62852cccd6fdd9c2982438626990bd7ac33bcb6135: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:27:36.744477 containerd[1612]: time="2025-09-04T04:27:36.744431081Z" level=info msg="CreateContainer within sandbox \"9a2afff1127ee494ee6314e61cd8d47208a10b329ecb6018797878ce03f30a08\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a5827324b28eb1c90406cb62852cccd6fdd9c2982438626990bd7ac33bcb6135\""
Sep 4 04:27:36.746849 containerd[1612]: time="2025-09-04T04:27:36.746797296Z" level=info msg="StartContainer for \"a5827324b28eb1c90406cb62852cccd6fdd9c2982438626990bd7ac33bcb6135\""
Sep 4 04:27:36.748183 containerd[1612]: time="2025-09-04T04:27:36.748159579Z" level=info msg="connecting to shim a5827324b28eb1c90406cb62852cccd6fdd9c2982438626990bd7ac33bcb6135" address="unix:///run/containerd/s/667691ecf5ef60fc15430275040d91886742e3f53a9c79d174605496358aa96e" protocol=ttrpc version=3
Sep 4 04:27:36.767394 sshd[5325]: Connection closed by 10.0.0.1 port 49568
Sep 4 04:27:36.767493 sshd-session[5321]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:36.774608 systemd[1]: Started cri-containerd-a5827324b28eb1c90406cb62852cccd6fdd9c2982438626990bd7ac33bcb6135.scope - libcontainer container a5827324b28eb1c90406cb62852cccd6fdd9c2982438626990bd7ac33bcb6135.
Sep 4 04:27:36.775324 systemd[1]: sshd@8-10.0.0.112:22-10.0.0.1:49568.service: Deactivated successfully.
Sep 4 04:27:36.779344 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 04:27:36.781949 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit.
Sep 4 04:27:36.784584 systemd-logind[1509]: Removed session 9.
Sep 4 04:27:36.963437 containerd[1612]: time="2025-09-04T04:27:36.963392075Z" level=info msg="StartContainer for \"a5827324b28eb1c90406cb62852cccd6fdd9c2982438626990bd7ac33bcb6135\" returns successfully"
Sep 4 04:27:36.964229 kubelet[2728]: E0904 04:27:36.964205 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:36.965319 kubelet[2728]: E0904 04:27:36.965171 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:37.157775 kubelet[2728]: I0904 04:27:37.157613 2728 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 4 04:27:37.157775 kubelet[2728]: I0904 04:27:37.157659 2728 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 4 04:27:37.966594 kubelet[2728]: E0904 04:27:37.966476 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:37.981190 kubelet[2728]: I0904 04:27:37.981111 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bhrnv" podStartSLOduration=31.91594649 podStartE2EDuration="45.9810903s" podCreationTimestamp="2025-09-04 04:26:52 +0000 UTC" firstStartedPulling="2025-09-04 04:27:22.401968503 +0000 UTC m=+50.666878147" lastFinishedPulling="2025-09-04 04:27:36.467112313 +0000 UTC m=+64.732021957" observedRunningTime="2025-09-04 04:27:37.97994854 +0000 UTC m=+66.244858184" watchObservedRunningTime="2025-09-04 04:27:37.9810903 +0000 UTC m=+66.245999944"
Sep 4 04:27:38.968492 kubelet[2728]: E0904 04:27:38.968428 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:39.143314 containerd[1612]: time="2025-09-04T04:27:39.143107569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18a61689b07f1391de9892494387c6a9902a46378521ec93dd38a833e77f8bb8\" id:\"49bafc28ca9567c9956f98b8a90848181736f4433cf7504047bcb04e809bdcd4\" pid:5394 exited_at:{seconds:1756960059 nanos:142772482}"
Sep 4 04:27:39.351176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount788972951.mount: Deactivated successfully.
Sep 4 04:27:40.188513 containerd[1612]: time="2025-09-04T04:27:40.188433652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:27:40.189507 containerd[1612]: time="2025-09-04T04:27:40.189456178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 4 04:27:40.190811 containerd[1612]: time="2025-09-04T04:27:40.190771634Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:27:40.193418 containerd[1612]: time="2025-09-04T04:27:40.193381996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:27:40.194186 containerd[1612]: time="2025-09-04T04:27:40.194138795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.726289611s"
Sep 4 04:27:40.194186 containerd[1612]: time="2025-09-04T04:27:40.194182828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 4 04:27:40.195303 containerd[1612]: time="2025-09-04T04:27:40.195125594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 4 04:27:40.197814 containerd[1612]: time="2025-09-04T04:27:40.197770582Z" level=info msg="CreateContainer within sandbox \"ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 4 04:27:40.209868 containerd[1612]: time="2025-09-04T04:27:40.209806720Z" level=info msg="Container e3d69584c57a43a51564af14014fed771334ed850ebed69f58bb6c7025f33852: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:27:40.225265 containerd[1612]: time="2025-09-04T04:27:40.225200923Z" level=info msg="CreateContainer within sandbox \"ddb8931c20f97f587650ef63005234e7ab9dfc7832d3dfb17e77e8bff5d1a1ae\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e3d69584c57a43a51564af14014fed771334ed850ebed69f58bb6c7025f33852\""
Sep 4 04:27:40.225856 containerd[1612]: time="2025-09-04T04:27:40.225827457Z" level=info msg="StartContainer for \"e3d69584c57a43a51564af14014fed771334ed850ebed69f58bb6c7025f33852\""
Sep 4 04:27:40.226877 containerd[1612]: time="2025-09-04T04:27:40.226851366Z" level=info msg="connecting to shim e3d69584c57a43a51564af14014fed771334ed850ebed69f58bb6c7025f33852" address="unix:///run/containerd/s/7fcde0fc959674c0939cb916beea55ae6e502aa4ba27ed323d232b5a636050fb" protocol=ttrpc version=3
Sep 4 04:27:40.255523 systemd[1]: Started cri-containerd-e3d69584c57a43a51564af14014fed771334ed850ebed69f58bb6c7025f33852.scope - libcontainer container e3d69584c57a43a51564af14014fed771334ed850ebed69f58bb6c7025f33852.
Sep 4 04:27:40.318812 containerd[1612]: time="2025-09-04T04:27:40.318760066Z" level=info msg="StartContainer for \"e3d69584c57a43a51564af14014fed771334ed850ebed69f58bb6c7025f33852\" returns successfully" Sep 4 04:27:41.067518 containerd[1612]: time="2025-09-04T04:27:41.067457416Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3d69584c57a43a51564af14014fed771334ed850ebed69f58bb6c7025f33852\" id:\"fa5aa87e70d7c00e89d34da41eb007a429944aeeddaf023d8967ceaf4d07673e\" pid:5462 exit_status:1 exited_at:{seconds:1756960061 nanos:66956367}" Sep 4 04:27:41.782036 systemd[1]: Started sshd@9-10.0.0.112:22-10.0.0.1:45482.service - OpenSSH per-connection server daemon (10.0.0.1:45482). Sep 4 04:27:41.872893 sshd[5483]: Accepted publickey for core from 10.0.0.1 port 45482 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc Sep 4 04:27:41.875217 sshd-session[5483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:27:41.881137 systemd-logind[1509]: New session 10 of user core. Sep 4 04:27:41.888447 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 04:27:42.069569 sshd[5486]: Connection closed by 10.0.0.1 port 45482 Sep 4 04:27:42.069857 sshd-session[5483]: pam_unix(sshd:session): session closed for user core Sep 4 04:27:42.078973 systemd[1]: sshd@9-10.0.0.112:22-10.0.0.1:45482.service: Deactivated successfully. Sep 4 04:27:42.081575 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 04:27:42.083911 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit. Sep 4 04:27:42.087773 systemd-logind[1509]: Removed session 10. 
Sep 4 04:27:42.089055 containerd[1612]: time="2025-09-04T04:27:42.088825776Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3d69584c57a43a51564af14014fed771334ed850ebed69f58bb6c7025f33852\" id:\"d473d681811c33be7533a7b6aa737d1b252350897495b1823ef969e8603d229d\" pid:5509 exit_status:1 exited_at:{seconds:1756960062 nanos:88276466}"
Sep 4 04:27:43.484623 containerd[1612]: time="2025-09-04T04:27:43.484549645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:27:43.486055 containerd[1612]: time="2025-09-04T04:27:43.486018358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 4 04:27:43.487470 containerd[1612]: time="2025-09-04T04:27:43.487427589Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:27:43.489937 containerd[1612]: time="2025-09-04T04:27:43.489865128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:27:43.490541 containerd[1612]: time="2025-09-04T04:27:43.490487434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.29533555s"
Sep 4 04:27:43.490541 containerd[1612]: time="2025-09-04T04:27:43.490536086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 4 04:27:43.500724 containerd[1612]: time="2025-09-04T04:27:43.500668918Z" level=info msg="CreateContainer within sandbox \"36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 4 04:27:43.512825 containerd[1612]: time="2025-09-04T04:27:43.512744971Z" level=info msg="Container dc0c7890369f14d9dcdf10665729ebc5474a667d2fde6b6cecf3bcd09bf15b17: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:27:43.536859 containerd[1612]: time="2025-09-04T04:27:43.536800418Z" level=info msg="CreateContainer within sandbox \"36c3e48c152747b1aa53cf6156ddb67f93729f3f038760bf15c677536bae9da3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"dc0c7890369f14d9dcdf10665729ebc5474a667d2fde6b6cecf3bcd09bf15b17\""
Sep 4 04:27:43.537401 containerd[1612]: time="2025-09-04T04:27:43.537346332Z" level=info msg="StartContainer for \"dc0c7890369f14d9dcdf10665729ebc5474a667d2fde6b6cecf3bcd09bf15b17\""
Sep 4 04:27:43.538552 containerd[1612]: time="2025-09-04T04:27:43.538517507Z" level=info msg="connecting to shim dc0c7890369f14d9dcdf10665729ebc5474a667d2fde6b6cecf3bcd09bf15b17" address="unix:///run/containerd/s/7999a4cbb81399334b77b0a3d0449fac329e58fab742d43fe352bcfa7df3b31e" protocol=ttrpc version=3
Sep 4 04:27:43.561461 systemd[1]: Started cri-containerd-dc0c7890369f14d9dcdf10665729ebc5474a667d2fde6b6cecf3bcd09bf15b17.scope - libcontainer container dc0c7890369f14d9dcdf10665729ebc5474a667d2fde6b6cecf3bcd09bf15b17.
Sep 4 04:27:43.615864 containerd[1612]: time="2025-09-04T04:27:43.615768078Z" level=info msg="StartContainer for \"dc0c7890369f14d9dcdf10665729ebc5474a667d2fde6b6cecf3bcd09bf15b17\" returns successfully"
Sep 4 04:27:44.040474 containerd[1612]: time="2025-09-04T04:27:44.040413870Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc0c7890369f14d9dcdf10665729ebc5474a667d2fde6b6cecf3bcd09bf15b17\" id:\"39362d94e5b60b522e073d0ffe3c825a8248d4b5fed20f0749351788befc5983\" pid:5587 exited_at:{seconds:1756960064 nanos:39990016}"
Sep 4 04:27:44.045966 kubelet[2728]: I0904 04:27:44.045385 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-854984d5c7-kpvmr" podStartSLOduration=42.088996839 podStartE2EDuration="52.045363448s" podCreationTimestamp="2025-09-04 04:26:52 +0000 UTC" firstStartedPulling="2025-09-04 04:27:33.535063071 +0000 UTC m=+61.799972725" lastFinishedPulling="2025-09-04 04:27:43.49142969 +0000 UTC m=+71.756339334" observedRunningTime="2025-09-04 04:27:44.044973306 +0000 UTC m=+72.309882960" watchObservedRunningTime="2025-09-04 04:27:44.045363448 +0000 UTC m=+72.310273092"
Sep 4 04:27:44.048877 kubelet[2728]: I0904 04:27:44.047417 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-kfp9d" podStartSLOduration=46.206782968 podStartE2EDuration="53.047400716s" podCreationTimestamp="2025-09-04 04:26:51 +0000 UTC" firstStartedPulling="2025-09-04 04:27:33.354404493 +0000 UTC m=+61.619314137" lastFinishedPulling="2025-09-04 04:27:40.195022231 +0000 UTC m=+68.459931885" observedRunningTime="2025-09-04 04:27:40.990407273 +0000 UTC m=+69.255316917" watchObservedRunningTime="2025-09-04 04:27:44.047400716 +0000 UTC m=+72.312310370"
Sep 4 04:27:46.973228 kubelet[2728]: E0904 04:27:46.973179 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:47.093963 systemd[1]: Started sshd@10-10.0.0.112:22-10.0.0.1:45494.service - OpenSSH per-connection server daemon (10.0.0.1:45494).
Sep 4 04:27:47.159670 sshd[5600]: Accepted publickey for core from 10.0.0.1 port 45494 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc
Sep 4 04:27:47.161392 sshd-session[5600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:47.169076 systemd-logind[1509]: New session 11 of user core.
Sep 4 04:27:47.178440 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 04:27:47.192946 kubelet[2728]: I0904 04:27:47.192904 2728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 04:27:47.329481 sshd[5603]: Connection closed by 10.0.0.1 port 45494
Sep 4 04:27:47.329789 sshd-session[5600]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:47.340866 systemd[1]: sshd@10-10.0.0.112:22-10.0.0.1:45494.service: Deactivated successfully.
Sep 4 04:27:47.343957 systemd[1]: session-11.scope: Deactivated successfully.
Sep 4 04:27:47.344974 systemd-logind[1509]: Session 11 logged out. Waiting for processes to exit.
Sep 4 04:27:47.352400 systemd[1]: Started sshd@11-10.0.0.112:22-10.0.0.1:45510.service - OpenSSH per-connection server daemon (10.0.0.1:45510).
Sep 4 04:27:47.353335 systemd-logind[1509]: Removed session 11.
Sep 4 04:27:47.433271 sshd[5619]: Accepted publickey for core from 10.0.0.1 port 45510 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc
Sep 4 04:27:47.435346 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:47.440961 systemd-logind[1509]: New session 12 of user core.
Sep 4 04:27:47.451603 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 04:27:47.628775 sshd[5622]: Connection closed by 10.0.0.1 port 45510 Sep 4 04:27:47.629515 sshd-session[5619]: pam_unix(sshd:session): session closed for user core Sep 4 04:27:47.639390 systemd[1]: sshd@11-10.0.0.112:22-10.0.0.1:45510.service: Deactivated successfully. Sep 4 04:27:47.641934 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 04:27:47.642982 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit. Sep 4 04:27:47.650538 systemd[1]: Started sshd@12-10.0.0.112:22-10.0.0.1:45516.service - OpenSSH per-connection server daemon (10.0.0.1:45516). Sep 4 04:27:47.651978 systemd-logind[1509]: Removed session 12. Sep 4 04:27:47.710022 sshd[5633]: Accepted publickey for core from 10.0.0.1 port 45516 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc Sep 4 04:27:47.712087 sshd-session[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:27:47.718509 systemd-logind[1509]: New session 13 of user core. Sep 4 04:27:47.726621 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 04:27:47.860063 sshd[5636]: Connection closed by 10.0.0.1 port 45516 Sep 4 04:27:47.860522 sshd-session[5633]: pam_unix(sshd:session): session closed for user core Sep 4 04:27:47.866068 systemd[1]: sshd@12-10.0.0.112:22-10.0.0.1:45516.service: Deactivated successfully. Sep 4 04:27:47.868517 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 04:27:47.869420 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit. Sep 4 04:27:47.871174 systemd-logind[1509]: Removed session 13. Sep 4 04:27:52.879703 systemd[1]: Started sshd@13-10.0.0.112:22-10.0.0.1:56452.service - OpenSSH per-connection server daemon (10.0.0.1:56452). 
Sep 4 04:27:52.951615 sshd[5654]: Accepted publickey for core from 10.0.0.1 port 56452 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc Sep 4 04:27:52.953686 sshd-session[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:27:52.962724 systemd-logind[1509]: New session 14 of user core. Sep 4 04:27:52.975588 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 04:27:53.217719 sshd[5657]: Connection closed by 10.0.0.1 port 56452 Sep 4 04:27:53.218394 sshd-session[5654]: pam_unix(sshd:session): session closed for user core Sep 4 04:27:53.225770 systemd[1]: sshd@13-10.0.0.112:22-10.0.0.1:56452.service: Deactivated successfully. Sep 4 04:27:53.228418 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 04:27:53.229373 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit. Sep 4 04:27:53.231147 systemd-logind[1509]: Removed session 14. Sep 4 04:27:55.973537 kubelet[2728]: E0904 04:27:55.973468 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:27:58.234625 systemd[1]: Started sshd@14-10.0.0.112:22-10.0.0.1:56460.service - OpenSSH per-connection server daemon (10.0.0.1:56460). Sep 4 04:27:58.316329 sshd[5677]: Accepted publickey for core from 10.0.0.1 port 56460 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc Sep 4 04:27:58.318899 sshd-session[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:27:58.326229 systemd-logind[1509]: New session 15 of user core. Sep 4 04:27:58.334474 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 4 04:27:58.699819 sshd[5680]: Connection closed by 10.0.0.1 port 56460 Sep 4 04:27:58.700340 sshd-session[5677]: pam_unix(sshd:session): session closed for user core Sep 4 04:27:58.705799 systemd[1]: sshd@14-10.0.0.112:22-10.0.0.1:56460.service: Deactivated successfully. Sep 4 04:27:58.708013 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 04:27:58.708976 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit. Sep 4 04:27:58.710293 systemd-logind[1509]: Removed session 15. Sep 4 04:27:59.973152 kubelet[2728]: E0904 04:27:59.973094 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:28:00.261971 containerd[1612]: time="2025-09-04T04:28:00.261803138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc0c7890369f14d9dcdf10665729ebc5474a667d2fde6b6cecf3bcd09bf15b17\" id:\"221031158999f9440ac3c3b1879ed019f296f711ca86ecdd2bfca4c322ea8343\" pid:5704 exited_at:{seconds:1756960080 nanos:261466959}" Sep 4 04:28:03.715111 systemd[1]: Started sshd@15-10.0.0.112:22-10.0.0.1:38070.service - OpenSSH per-connection server daemon (10.0.0.1:38070). Sep 4 04:28:03.786769 sshd[5721]: Accepted publickey for core from 10.0.0.1 port 38070 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc Sep 4 04:28:03.788419 sshd-session[5721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:28:03.793837 systemd-logind[1509]: New session 16 of user core. Sep 4 04:28:03.800472 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 04:28:03.955956 sshd[5724]: Connection closed by 10.0.0.1 port 38070 Sep 4 04:28:03.956420 sshd-session[5721]: pam_unix(sshd:session): session closed for user core Sep 4 04:28:03.962216 systemd[1]: sshd@15-10.0.0.112:22-10.0.0.1:38070.service: Deactivated successfully. 
Sep 4 04:28:03.965050 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 04:28:03.966351 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit. Sep 4 04:28:03.968097 systemd-logind[1509]: Removed session 16. Sep 4 04:28:04.972841 kubelet[2728]: E0904 04:28:04.972420 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:28:05.701697 containerd[1612]: time="2025-09-04T04:28:05.701629927Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc0c7890369f14d9dcdf10665729ebc5474a667d2fde6b6cecf3bcd09bf15b17\" id:\"5f541ddc38d6ef02415c1d7d50579fc85b269c46a423f6f60626ecc78996e51a\" pid:5765 exited_at:{seconds:1756960085 nanos:701407710}" Sep 4 04:28:05.735754 containerd[1612]: time="2025-09-04T04:28:05.735708148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3d69584c57a43a51564af14014fed771334ed850ebed69f58bb6c7025f33852\" id:\"62c5b39c4cfd025cdb8aef1f61a92c1ea613042a638b6e313fc16f2eb6ab4d4c\" pid:5748 exited_at:{seconds:1756960085 nanos:735338134}" Sep 4 04:28:08.975208 systemd[1]: Started sshd@16-10.0.0.112:22-10.0.0.1:38084.service - OpenSSH per-connection server daemon (10.0.0.1:38084). Sep 4 04:28:09.087070 sshd[5785]: Accepted publickey for core from 10.0.0.1 port 38084 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc Sep 4 04:28:09.089826 sshd-session[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:28:09.097867 systemd-logind[1509]: New session 17 of user core. Sep 4 04:28:09.102892 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 4 04:28:09.169469 containerd[1612]: time="2025-09-04T04:28:09.169410142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18a61689b07f1391de9892494387c6a9902a46378521ec93dd38a833e77f8bb8\" id:\"907e324ba761d9d2f77ab59137d84544c36cf08de6a08b38eb76fac1670d00e9\" pid:5802 exited_at:{seconds:1756960089 nanos:168674212}"
Sep 4 04:28:09.284844 sshd[5813]: Connection closed by 10.0.0.1 port 38084
Sep 4 04:28:09.296741 systemd[1]: Started sshd@17-10.0.0.112:22-10.0.0.1:38092.service - OpenSSH per-connection server daemon (10.0.0.1:38092).
Sep 4 04:28:09.310797 sshd-session[5785]: pam_unix(sshd:session): session closed for user core
Sep 4 04:28:09.315713 systemd[1]: sshd@16-10.0.0.112:22-10.0.0.1:38084.service: Deactivated successfully.
Sep 4 04:28:09.318210 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 04:28:09.319329 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit.
Sep 4 04:28:09.321062 systemd-logind[1509]: Removed session 17.
Sep 4 04:28:09.387931 sshd[5824]: Accepted publickey for core from 10.0.0.1 port 38092 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc
Sep 4 04:28:09.389693 sshd-session[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:28:09.397105 systemd-logind[1509]: New session 18 of user core.
Sep 4 04:28:09.409596 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 04:28:09.765502 sshd[5830]: Connection closed by 10.0.0.1 port 38092
Sep 4 04:28:09.766448 sshd-session[5824]: pam_unix(sshd:session): session closed for user core
Sep 4 04:28:09.781014 systemd[1]: Started sshd@18-10.0.0.112:22-10.0.0.1:38108.service - OpenSSH per-connection server daemon (10.0.0.1:38108).
Sep 4 04:28:09.783884 systemd[1]: sshd@17-10.0.0.112:22-10.0.0.1:38092.service: Deactivated successfully.
Sep 4 04:28:09.787073 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 04:28:09.790708 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit.
Sep 4 04:28:09.792920 systemd-logind[1509]: Removed session 18.
Sep 4 04:28:09.858308 sshd[5839]: Accepted publickey for core from 10.0.0.1 port 38108 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc
Sep 4 04:28:09.860610 sshd-session[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:28:09.865878 systemd-logind[1509]: New session 19 of user core.
Sep 4 04:28:09.872540 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 04:28:12.089114 sshd[5845]: Connection closed by 10.0.0.1 port 38108
Sep 4 04:28:12.091601 sshd-session[5839]: pam_unix(sshd:session): session closed for user core
Sep 4 04:28:12.103684 systemd[1]: sshd@18-10.0.0.112:22-10.0.0.1:38108.service: Deactivated successfully.
Sep 4 04:28:12.108749 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 04:28:12.109141 systemd[1]: session-19.scope: Consumed 690ms CPU time, 72.7M memory peak.
Sep 4 04:28:12.110474 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit.
Sep 4 04:28:12.113989 systemd-logind[1509]: Removed session 19.
Sep 4 04:28:12.116338 systemd[1]: Started sshd@19-10.0.0.112:22-10.0.0.1:41434.service - OpenSSH per-connection server daemon (10.0.0.1:41434).
Sep 4 04:28:12.192302 sshd[5865]: Accepted publickey for core from 10.0.0.1 port 41434 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc
Sep 4 04:28:12.194517 sshd-session[5865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:28:12.200377 systemd-logind[1509]: New session 20 of user core.
Sep 4 04:28:12.205570 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 04:28:13.020455 sshd[5868]: Connection closed by 10.0.0.1 port 41434
Sep 4 04:28:13.031759 systemd[1]: sshd@19-10.0.0.112:22-10.0.0.1:41434.service: Deactivated successfully.
Sep 4 04:28:13.022024 sshd-session[5865]: pam_unix(sshd:session): session closed for user core
Sep 4 04:28:13.035058 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 04:28:13.036333 systemd-logind[1509]: Session 20 logged out. Waiting for processes to exit.
Sep 4 04:28:13.040462 systemd[1]: Started sshd@20-10.0.0.112:22-10.0.0.1:41448.service - OpenSSH per-connection server daemon (10.0.0.1:41448).
Sep 4 04:28:13.041576 systemd-logind[1509]: Removed session 20.
Sep 4 04:28:13.096058 sshd[5880]: Accepted publickey for core from 10.0.0.1 port 41448 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc
Sep 4 04:28:13.098024 sshd-session[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:28:13.103999 systemd-logind[1509]: New session 21 of user core.
Sep 4 04:28:13.112467 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 04:28:13.250934 sshd[5883]: Connection closed by 10.0.0.1 port 41448
Sep 4 04:28:13.252550 sshd-session[5880]: pam_unix(sshd:session): session closed for user core
Sep 4 04:28:13.258221 systemd[1]: sshd@20-10.0.0.112:22-10.0.0.1:41448.service: Deactivated successfully.
Sep 4 04:28:13.260674 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 04:28:13.261619 systemd-logind[1509]: Session 21 logged out. Waiting for processes to exit.
Sep 4 04:28:13.263916 systemd-logind[1509]: Removed session 21.
Sep 4 04:28:13.976097 kubelet[2728]: E0904 04:28:13.976028 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:28:15.845893 containerd[1612]: time="2025-09-04T04:28:15.845809190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3d69584c57a43a51564af14014fed771334ed850ebed69f58bb6c7025f33852\" id:\"72020bf2dabcf2401c1de04363dd941a29fa600b96fb5c22821e4e529e303b0c\" pid:5910 exited_at:{seconds:1756960095 nanos:845489380}"
Sep 4 04:28:18.264042 systemd[1]: Started sshd@21-10.0.0.112:22-10.0.0.1:41458.service - OpenSSH per-connection server daemon (10.0.0.1:41458).
Sep 4 04:28:18.330375 sshd[5923]: Accepted publickey for core from 10.0.0.1 port 41458 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc
Sep 4 04:28:18.332560 sshd-session[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:28:18.339159 systemd-logind[1509]: New session 22 of user core.
Sep 4 04:28:18.344528 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 04:28:18.473193 sshd[5926]: Connection closed by 10.0.0.1 port 41458
Sep 4 04:28:18.473626 sshd-session[5923]: pam_unix(sshd:session): session closed for user core
Sep 4 04:28:18.478872 systemd[1]: sshd@21-10.0.0.112:22-10.0.0.1:41458.service: Deactivated successfully.
Sep 4 04:28:18.481416 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 04:28:18.482320 systemd-logind[1509]: Session 22 logged out. Waiting for processes to exit.
Sep 4 04:28:18.484242 systemd-logind[1509]: Removed session 22.
Sep 4 04:28:23.487537 systemd[1]: Started sshd@22-10.0.0.112:22-10.0.0.1:54378.service - OpenSSH per-connection server daemon (10.0.0.1:54378).
Sep 4 04:28:23.552079 sshd[5942]: Accepted publickey for core from 10.0.0.1 port 54378 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc
Sep 4 04:28:23.554258 sshd-session[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:28:23.559509 systemd-logind[1509]: New session 23 of user core.
Sep 4 04:28:23.563622 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 04:28:23.740826 sshd[5945]: Connection closed by 10.0.0.1 port 54378
Sep 4 04:28:23.741549 sshd-session[5942]: pam_unix(sshd:session): session closed for user core
Sep 4 04:28:23.746906 systemd[1]: sshd@22-10.0.0.112:22-10.0.0.1:54378.service: Deactivated successfully.
Sep 4 04:28:23.749548 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 04:28:23.750612 systemd-logind[1509]: Session 23 logged out. Waiting for processes to exit.
Sep 4 04:28:23.752838 systemd-logind[1509]: Removed session 23.
Sep 4 04:28:28.756639 systemd[1]: Started sshd@23-10.0.0.112:22-10.0.0.1:54382.service - OpenSSH per-connection server daemon (10.0.0.1:54382).
Sep 4 04:28:28.815643 sshd[5958]: Accepted publickey for core from 10.0.0.1 port 54382 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc
Sep 4 04:28:28.816061 sshd-session[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:28:28.823225 systemd-logind[1509]: New session 24 of user core.
Sep 4 04:28:28.828584 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 04:28:28.959556 sshd[5961]: Connection closed by 10.0.0.1 port 54382
Sep 4 04:28:28.959909 sshd-session[5958]: pam_unix(sshd:session): session closed for user core
Sep 4 04:28:28.965131 systemd[1]: sshd@23-10.0.0.112:22-10.0.0.1:54382.service: Deactivated successfully.
Sep 4 04:28:28.967697 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 04:28:28.968657 systemd-logind[1509]: Session 24 logged out. Waiting for processes to exit.
Sep 4 04:28:28.970408 systemd-logind[1509]: Removed session 24.
Sep 4 04:28:33.976649 systemd[1]: Started sshd@24-10.0.0.112:22-10.0.0.1:36846.service - OpenSSH per-connection server daemon (10.0.0.1:36846).
Sep 4 04:28:34.039435 sshd[5976]: Accepted publickey for core from 10.0.0.1 port 36846 ssh2: RSA SHA256:A6ijjQuz6xgc/K5620kUVf4DFiLJ495e/wtaxjU16lc
Sep 4 04:28:34.041511 sshd-session[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:28:34.047037 systemd-logind[1509]: New session 25 of user core.
Sep 4 04:28:34.057503 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 04:28:34.186441 sshd[5979]: Connection closed by 10.0.0.1 port 36846
Sep 4 04:28:34.186906 sshd-session[5976]: pam_unix(sshd:session): session closed for user core
Sep 4 04:28:34.193109 systemd[1]: sshd@24-10.0.0.112:22-10.0.0.1:36846.service: Deactivated successfully.
Sep 4 04:28:34.195962 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 04:28:34.197614 systemd-logind[1509]: Session 25 logged out. Waiting for processes to exit.
Sep 4 04:28:34.199793 systemd-logind[1509]: Removed session 25.
Sep 4 04:28:35.681107 containerd[1612]: time="2025-09-04T04:28:35.680903520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc0c7890369f14d9dcdf10665729ebc5474a667d2fde6b6cecf3bcd09bf15b17\" id:\"940934926b330abc38acb45ad8652742430d13189b2d4f454e24ae03c524b2b6\" pid:6023 exited_at:{seconds:1756960115 nanos:680672296}"
Sep 4 04:28:35.737329 containerd[1612]: time="2025-09-04T04:28:35.737222819Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3d69584c57a43a51564af14014fed771334ed850ebed69f58bb6c7025f33852\" id:\"520e12954b45b3f960eadc3eab58ae7048e6f0792dbf7a715daadf3c6ed98cd3\" pid:6006 exited_at:{seconds:1756960115 nanos:736801549}"